If you were alive during the 90s or 2000s, you surely remember that tune. It’s the anthem of Intel, the world’s most dominant chip maker, or at least, they were. Since then, Intel has had a pretty rough fall from glory. In fact, today, Intel barely ranks in the top 10 when it comes to the world’s largest chip makers. They come in behind Nvidia, TSMC, Broadcom, Samsung, ASML, AMD, Qualcomm, Applied Materials, and Texas Instruments. And when you contextualize this with Nvidia’s performance, things…
Why? AMD doesn’t make phone chips, yet they’re dominating Intel. Likewise for NVIDIA, which is at the top of the chip maker list.
The problem isn’t what market segments they’re in, the problem is that they’re not dominant in any of them. AMD is better at high end gaming (X3D chips especially), workstations (Threadripper), and high performance servers (Epyc), and in some cases they’re better on power efficiency too. Intel is better at the low end generally, but that’s not a big market, and it’s shrinking (e.g. people moving to ARM). AMD has been chipping away at those segments, one at a time.
Intel entering phones will end up the same way as them entering GPUs: they’ll have to target the low end of the market to get traction, and they’re going to have a lot of trouble challenging the big players. Also, x86 isn’t a good fit there, so they’ll also need to break into the ARM market as well.
No, what they need is to execute well in the spaces they’re already in.
When AMD introduced the first Epyc, they marketed it with the slogan: “Nobody ever got fired for buying Intel. Until now.”
And they lived up to the boast. The Zen architecture was just that good, and they’ve been improving on it ever since. Meanwhile, the technology everyone assumed Intel had up their sleeve turned out to be underwhelming. It’s almost as bad as IA-64 vs. AMD64, and at least Intel managed to recover from that one fairly quickly.
They really need to come up with another Core if they want to stay relevant.
Actually, AMD do make phone chips. That is, they design the GPUs in some Exynos chips, which are inside some Samsung devices.
Huh, TIL. It looks like they basically put Radeon cores into it.
Yes, and 5 years ago, they had very little of it. I’m talking about the trajectory, and AMD seems to be getting the lion’s share of new sales.
I don’t really see ARM as having an inherent advantage. The main reason Apple’s ARM chips are eating x86’s lunch is that Apple has purchased a lot of capacity on the next-generation nodes (e.g. 3nm), while x86 chips tend to ship on older nodes (e.g. 5nm). Even so, AMD’s cores aren’t really that far behind Apple’s, so I think the node advantage is the main factor here.
That said, the main advantage ARM has is that it’s relatively easy to license it to make your own chips and not involve one of the bigger CPU manufacturers. Apple has their own, Amazon has theirs, and the various phone manufacturers have their own as well. I don’t think Intel would have a decisive advantage there, since companies tend to go with ARM to save on costs, and I don’t think Intel wants to be in another price war.
That’s why I think Intel should leverage what they’re good at: make better x86 chips, using external fabs if necessary. Intel should have an inherent advantage in power and performance since they use monolithic designs, but those designs cost more than AMD’s chiplet approach. Intel should be the premium brand here, with AMD trailing behind; instead, Intel’s fab limitations have them falling behind and jacking up clock speeds (and thus killing their power efficiency) to stay competitive.
In short, I really don’t think ARM is the right move right now, unless it means selling capacity at their fabs to ARM designers. What they need is a really compelling product, and they haven’t delivered one recently…
It’s not just the node, it’s also the design. If I remember correctly, ARM has fixed-length instructions, which helps a lot with decoding. Anyway, Apple’s M CPUs are still way better when it comes to perf/power ratio.
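To illustrate the decode point with a toy sketch (completely made-up encodings, nothing like a real ISA): with a fixed instruction width, every instruction boundary is known up front, so a wide decoder can start on several instructions at once. With variable-length encodings, you only learn where the next instruction starts after you’ve sized the current one.

```python
# Toy model only -- invented encodings, not a real ISA. The point is the
# serial dependency that variable-length instructions add when finding
# instruction boundaries.

def fixed_length_starts(n_instructions, width=4):
    # Every boundary is a simple multiple of the width, so a wide decoder
    # can grab N instructions in parallel.
    return [i * width for i in range(n_instructions)]

def variable_length_starts(code, length_of):
    # Boundaries have to be discovered one at a time, because instruction
    # i+1 starts wherever instruction i happens to end.
    starts, offset = [], 0
    while offset < len(code):
        starts.append(offset)
        offset += length_of(code, offset)  # serial dependency
    return starts

print(fixed_length_starts(8))  # [0, 4, 8, 12, 16, 20, 24, 28]

# Pretend encoding: the first byte of each instruction stores its own length.
toy_code = bytes([2, 0, 3, 0, 0, 1, 4, 0, 0, 0])
print(variable_length_starts(toy_code, lambda b, off: b[off]))  # [0, 2, 5, 6]
```

Real x86 decoders work around this with predecode tricks and speculation, but that costs transistors and power.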
Yes, it’s certainly more complicated than that, but the lithography is a huge part since they can cram more transistors into a smaller area, which is critical for power savings.
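For the power angle, the usual back-of-the-envelope model is CMOS dynamic power, roughly:

P_dyn ≈ α · C · V² · f

A newer node shrinks the capacitance per gate and usually lets you drop the supply voltage a bit, and since voltage enters squared, that’s where a big chunk of the perf/W gain comes from.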
I highly doubt instruction decoding is a significant factor, but I’d love to be proven wrong. If you know of a good writeup about it, I’d love to read it.
Here, I’ve found an article. M CPUs are also much more integrated, like an SoC, which gives them further performance/power advantages at the cost of extensibility.