Whoa, that’s like 32GB of Windows RAM. Seems excessive to me tbh
The annoying thing is I have had people claim that 8GB or 16GB is fine on Apple and works better than on PC laptops. To the point that one redditor point-blank refused to believe I owned an Apple laptop. I literally had to take a photograph of said laptop and show it to them before they would believe me about the RAM capacity.
You should have said “sure buddy” and ignored them.
The problem is they will then keep spreading misinformation.
And that’s your problem? Do you hold $AAPL
Apple helped spread that misinformation. So why would I hold some of their stock if I am trying to counter it?
No, I want companies to stop spreading this bullshit and for people to stop falling for it. I don’t hold any stocks at all. In fact that kind of bullshit I am fairly against.
…or $SPY, or $QQQ, or…
What even are those?
They’re just popular ETFs which contain a lot of $AAPL. I was just commenting that even if someone doesn’t explicitly hold any $AAPL, if they own ETFs/mutual funds, they are likely exposed to $AAPL.
Doesn’t apply to you though since you said you don’t own any stock :)
I own an 8GB MacBook Pro for work. It's definitely better than a PC with 8GB of RAM, but not better than or even close to a PC with 16GB. The amount of stuttering and freezing while it's swapping is insane.
Maybe this is true if you use Windows. If you use Linux on your PC versus macOS on a MacBook you will probably find the PC performs comparably if not better.
Oh totally, Linux is in the same ballpark as, if not better than, Macs when it comes to RAM usage. Windows is just a hog
Until you open a web browser or an Electron app. Them folks don’t really seem to give a shit about RAM usage.
We are talking about PC vs Mac. Both have the same problem when it comes to chromium based things.
My Linux machine has 64 GiB of RAM, which is like 128 GiB of Mac RAM. It’s still not enough
Serious question what are you using all that RAM for? I am having a hard time justifying upgrading one of my laptops to 32 GiB, nevermind 64 GiB.
For me in particular I’m a software developer who works on developer tools, so I have a lot of tests running in VMs so I can test on different operating systems. I just finished running a test suite that used up over 50 gigs of RAM for a dozen VMs.
Same, 48c/96t with 192GB RAM.
make -j is fun, htop triggers epilepsy.
Few VMs, but tons of LXC containers. It's like having one machine that runs 20 systems in parallel, really fast.
Have containers for dev, for browsing, for Wine. The dream finally made manifest.
For games, modding can use a lot. It can get to the point of needing more than 32GB, but rarely.
Usually you'd want 64GB or more for things like video editing, 3D modeling, running simulations, LLMs, or virtual machines.
I use virtual machines and run local LLMs. LLMs need VRAM rather than CPU RAM; you shouldn't be doing that on a laptop without a serious NPU or GPU, if at all. I don't know if I will be using VMs heavily on this machine or not, but that would be a good reason to have more RAM. Even so, 32 GiB should be enough for a few VMs running concurrently.
That’s fair. I’ve put it there as more of a possible use case rather than something you should be consistently doing.
Although iGPU can perform quite well when given a lot of RAM, afaik.
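A rough rule of thumb behind the VRAM point above: an LLM's weight footprint is roughly parameter count times bytes per parameter (plus overhead for the KV cache and runtime). A small sketch, where the 7B/4-bit figures are just illustrative examples, not any specific model:

```python
def weights_gib(n_params_billion: float, bytes_per_param: float) -> float:
    """Approximate GiB needed just to hold a model's weights in memory."""
    return n_params_billion * 1e9 * bytes_per_param / 2**30

# e.g. a 7B-parameter model at 4-bit quantization (~0.5 bytes/param)
print(f"{weights_gib(7, 0.5):.1f} GiB")  # roughly 3.3 GiB of weights
# the same model at fp16 (2 bytes/param)
print(f"{weights_gib(7, 2):.1f} GiB")    # roughly 13 GiB of weights
```

Which is why a quantized 7B model fits comfortably in a 16 GiB unified-memory machine, while fp16 already eats most of it.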
Honestly, I think that for many people using a laptop or phone, doing LLM stuff remotely makes way more sense. It's just too power-intensive to do a lot of that on battery. That doesn't mean giving up control of the hardware: I keep a machine with a beefy GPU connected to the network and can use it remotely. And something like Stable Diffusion normally requires only pretty limited bandwidth to use remotely.
If people really need to do a bunch of local LLM work (say they have a hefty source of power but lack connectivity, or they're running software that moves a lot of data back and forth to the LLM hardware), I'd consider lugging around a small headless LLM box with a beefy GPU alongside the laptop, plugging the box into the laptop via Ethernet or whatnot, and doing the LLM stuff on the headless box. Laptops are just not a fantastic form factor for heavy crunching; they've got limited ability to dissipate heat and tight space constraints to work with.
Yeah it is easier to do it on a desktop or over a network. That’s what I was trying to imply. Although having an NPU can help. Regardless I would rather be using my own server than something like ChatGPT.
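The headless-box setup is pretty simple in practice if the box runs an HTTP inference server such as llama.cpp's built-in one. A minimal sketch; the box's address is made up (adjust to your LAN), and it assumes a llama.cpp-style `/completion` endpoint taking `prompt` and `n_predict`:

```python
import json
import urllib.request

# Assumed address of the headless LLM box on the laptop's Ethernet link.
LLM_BOX = "http://192.168.1.50:8080"

def build_request(prompt: str, n_predict: int = 128) -> urllib.request.Request:
    """Build a POST to a llama.cpp-style /completion endpoint."""
    body = json.dumps({"prompt": prompt, "n_predict": n_predict}).encode()
    return urllib.request.Request(
        f"{LLM_BOX}/completion",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# To actually run it (needs the box to be up and serving):
#   with urllib.request.urlopen(build_request("Hello")) as resp:
#       print(json.loads(resp.read())["content"])
```

The laptop stays cool and on battery; the watts get burned on the box.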
Photo Editing, Video Transcoding.
Any memory that’s going unused by apps is going to be used by the OS for caching disk contents. That’s not as significant with SSD as with rotational drives, but it’s still providing a benefit, albeit one with diminishing returns as the size of the cache increases.
That being said, if this is a laptop and if you shut down or hibernate your laptop on a regular basis, then you’re going to be flushing the memory cache all the time, and it may buy you less.
IIRC, Apple’s default mode of operation on their laptops these days is to just have them sleep, not hibernate, so a Mac user would probably benefit from that cache.
Outside of storage servers and ZFS, no one is buying RAM specifically to use it as disk cache. You will also find that Windows laptops are designed to be left in sleep rather than hibernated.
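The cache effect described above is easy to see on Linux, where `/proc/meminfo` reports how much otherwise-idle RAM the kernel is using as page cache (`Cached` + `Buffers`). A small sketch that parses `/proc/meminfo`-style output; the sample numbers are made up:

```python
def parse_meminfo(text: str) -> dict:
    """Parse /proc/meminfo-style output into {field: kB}."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, rest = line.split(":", 1)
            info[key] = int(rest.strip().split()[0])  # values are in kB
    return info

# On a real system: text = open("/proc/meminfo").read()
sample = """MemTotal:       16384000 kB
MemFree:         1024000 kB
Buffers:          512000 kB
Cached:          8192000 kB"""
mi = parse_meminfo(sample)
cache_gib = (mi["Cached"] + mi["Buffers"]) / 2**20  # kB -> GiB
print(f"RAM currently used as disk cache: {cache_gib:.1f} GiB")
```

`free -h` shows the same figure in its buff/cache column.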
k8s
Does it?
Previous benchmarks have shown the 8 GB models seriously fell behind in performance.
Ironically, it’s the other way around, since Apple has to share its RAM between the GPU and CPU, whereas other computers typically keep them separate.
So in normal usage with 8 GB you’re automatically down to 7, since at least 1 GB is taken by the GPU. More if you’re doing anything reasonably graphics-heavy with it.
Is it like SI RAM vs US Customary RAM?
Yes. Freedom RAM equals approx. 1.6 metric RAMs. Unless your computer is on water, in which case it’s 1.857
Is that calibrated against the Universal Prototype Kilobyte in Paris?
iirc that one is outdated as it’s 1024 bytes. They haven’t been able to shave off the extra 24 bytes
AI models need that RAM to work.