Anyone willing to summarize those mistakes here, for those who can’t watch the video rn?
He doesn’t list what the mistakes will be. He said he fears that, because hardware people aren’t software people, they will make the same mistakes x86 made, which Arm then repeated later.
He did mention that fixing those mistakes was faster for Arm than for x86, which gives some hope that fixing the mistakes on RISC-V will take even less time.
I think it was something with instruction sets? Pretty sure I read something about this months ago.
No, it was about the prediction engines that contain security vulnerabilities. The problem is that software has no control over that, because the hardware speculates ahead on its own for performance optimization.
Aah, right, that.
Prediction is a hard problem when coupled with caches. It’s relatively easy to say that no speculative instruction has any effect until it’s confirmed taken, if you ignore caches. However, caches need to fetch information from memory for an instruction to evaluate, and rewinding a cache to its previous state on a mispredict is almost impossible. Especially when you consider that the fraction of time a modern processor spends executing non-speculative code is very low.
Not having prediction is consigning yourself to 1990s performance, just with faster clocks.
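To make the cache angle concrete, here’s a rough sketch of the classic Spectre v1 (“bounds check bypass”) pattern in C. The names and sizes are made up for illustration; the point is just that the speculative load leaves a footprint in the cache that never gets rolled back:

    /* Sketch of the Spectre v1 "bounds check bypass" pattern.
       All names and sizes here are made up for illustration. */
    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    uint8_t array2[256 * 4096];   /* probe array: one cache line per possible byte value */

    uint8_t victim(size_t x, size_t array1_size) {
        if (x < array1_size) {                /* branch the CPU may mispredict */
            /* Runs speculatively even when x is out of bounds: the secret byte
               array1[x] decides which line of array2 gets pulled into the cache,
               and that cache fill is not undone once the misprediction is
               discovered, so it can be recovered later by timing loads. */
            return array2[array1[x] * 4096];
        }
        return 0;
    }

The registers get rolled back just fine on a mispredict; it’s the cache fill you can’t cheaply undo, which is exactly the problem described above.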
I mean, that’s all chip architectures are, so yes.
Basically, his concern is that if they don’t cooperate with software engineers, the product won’t be able to run AAA games.
It’s more of a warning than a prediction.
What are “AAA turns”?
Sorry, AAA games. I was swiping on my keyboard and didn’t see the mistake.
SwiftKey?
Not OP, but consider using FUTO Keyboard. It’s made by the group Louis Rossmann works with, and it has offline speech-to-text (no sending data to Google), swipe typing, and completions. It’s also source-available, which isn’t as good as open source, but you can examine the code and verify their claims if you want to.
I’m using it and, while it’s not perfect, it’s way better than the open source Android keyboards with swiping that I’ve tried.
Thanks, will try it out! I need an emoji picker though. Does it have that?
Edit: typing with it now. It has an emoji picker. 👍
I like the picker’s grouping, actually. Body parts (hands) are closer to faces.
The recent-emoji section doesn’t work, though.
It doesn’t have the latest emoji set, as far as I can tell.
The swiping is much more sensitive than Gboard’s. I’m not a fan so far. Maybe it’s still learning, but it doesn’t seem to handle speed as well as Gboard does.
Prediction suggestions are terrible so far.
I don’t like that swipe delete doesn’t delete whole words.
All in all, I don’t think I can recommend it in its current state.
But if you type by pressing buttons, the predictions are actually pretty good. Maybe that saves a bit of time if you’re stationary rather than typing on the move.
Yeah, it’s very much alpha software, but it works surprisingly well for being in such an early state. I’m using it as my keyboard now, and it works well enough, but certainly not perfect.
Then again, I’m willing to deal with a lot of nonsense to avoid Google, so YMMV.
I hear the speech-to-text is pretty good. I haven’t tried it (I hate dictation), but maybe give it a whirl before you give up on the keyboard; it’s supposed to be its killer feature.
Yeah, I will admit that, too. Very good for alpha software. 👍
Giving it a whirl right now. Thanks for the recommendation.
I’ll give it a shot. I’m using Google’s keyboard at the moment.
Google in this case. I’ll try the alternative mentioned.
Instruction creep maybe? Pretty sure I’ve also seen stuff suggesting that Torvalds is anti-speculative-execution due to its vulnerabilities, so he could also be referring to that.
Counterintuitive, but more instructions are usually better. They enable you (but let’s be honest, the compiler) to be much more specific, which usually has positive performance implications for minimal, if any, cost in binary size. Take SIMD, for example: hyper-specific math operations on large chunks of data. These instructions are extremely specific, but when properly utilized they bring huge performance improvements.
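As a toy illustration (SSE intrinsics, so this assumes an x86 machine; the function name is made up, and a modern compiler would often auto-vectorize the plain scalar loop anyway):

    #include <xmmintrin.h>

    /* Adds four floats with a single ADDPS instruction instead of four
       separate scalar adds: very specific, but fast when it applies. */
    void add4(const float *a, const float *b, float *out) {
        __m128 va = _mm_loadu_ps(a);      /* load 4 floats from a */
        __m128 vb = _mm_loadu_ps(b);      /* load 4 floats from b */
        __m128 vr = _mm_add_ps(va, vb);   /* one instruction, 4 additions */
        _mm_storeu_ps(out, vr);           /* store 4 results */
    }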
I understand some instruction-set extensions are used to good effect in x86 today, but there is also a sizeable number of instructions that are rarely emitted by compilers and mostly continue to exist for backwards compatibility. That does not really make me think “more instructions are usually better”. It makes me think “CISC ISAs are usually bloated with unused instructions”.
My whole understanding is that while more specific instructions do provide benefits, the use cases for them make up a small fraction of code and often sacrifice single-cycle completion. The most commonly cited benefit of RISC is that it can complete the same work in fewer clock cycles per program at a given clock rate, and it’s often argued that it does so at a lower energy cost.
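For reference, the standard textbook framing of that tradeoff is:

    time per program = (instructions / program) × (cycles / instruction) × (seconds / cycle)

RISC bets on shrinking the last two factors (simple instructions pipeline better and clock higher) while accepting more instructions per program; classic CISC bets the other way.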
I imagine that RISC-V will introduce further standard extensions in the future (hopefully after it has finalized the ones already waiting), ideally with thoroughly thought-out instructions that will actually see regular use.
I do see RISC-V proponents running simulated benchmarks showing RISC-V to be more efficient. I have not seen anything similar from x86 proponents, who usually either make general arguments or, worse, just point at modern x86 chips that have decades of research, funding, and design behind them.
Overall, I see a lot of doubt that ISAs even matter to performance in any significant fashion, and I believe that for performance at GHz-class clock speeds.
This is probably correct.