Kernel modules don’t have to be open source, provided they follow certain rules like not using GPL-only symbols. This is the same reason you can use an NVIDIA driver.
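For what it’s worth, the mechanism is pretty simple. Here’s a minimal out-of-tree module sketch (my own illustration, not anything shipped): declaring a non-GPL licence with MODULE_LICENSE() taints the kernel and locks the module out of every symbol exported with EXPORT_SYMBOL_GPL(), leaving only the plain EXPORT_SYMBOL() surface.

```c
// Minimal sketch of an out-of-tree module; needs the usual Kbuild makefile to build.
#include <linux/module.h>
#include <linux/init.h>

static int __init demo_init(void)
{
	pr_info("demo module loaded\n");
	return 0;
}

static void __exit demo_exit(void)
{
	pr_info("demo module unloaded\n");
}

module_init(demo_init);
module_exit(demo_exit);

/* A non-GPL licence string is what marks the module proprietary: the loader
 * will then refuse to resolve any EXPORT_SYMBOL_GPL() symbols for it. */
MODULE_LICENSE("Proprietary");
MODULE_DESCRIPTION("Illustrative out-of-tree module");
```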
It’s not enforced so much by law as by what the FSF and the Linux Foundation can prove and are willing to pursue; going after a company that size is expensive, especially when it’s a Linux Foundation partner. A lot of major Linux Foundation partners are actively breaking the GPL.
Both Intel and AMD invest a lot in open-source drivers, firmware, and userspace applications, but due to the nature of UEFI on x86_64, a lot of the proprietary crap is loaded from ROM on the motherboard and as microcode anyway.
I work with SoC suppliers, including Qualcomm, and can confirm: you need to sign an NDA to get a heavily patched, old, orphaned kernel, often with drivers provided only as precompiled binaries, which prevents you from updating the kernel yourself.
If you want that source code, you also need to pay a lot of money yearly to be a Qualcomm partner, and even then you might not have access to the sources for all the binaries you use. Even when you do get the sources, don’t expect them to be updated for new kernel compatibility; you’ve gotta do that yourself.
Many other manufacturers do this as well, but few are as bad. The environment is getting better, but open source support still seems to be a feature that many large manufacturers feel they can live without.
Yeah, in the short term there are going to be a lot of lose/lose scenarios for them, but that’s the stupid prize for playing stupid games with what they released.
I hope they stick it out; games like No Man’s Sky show both that a developer who cares enough to try can earn back the trust of a player base, and that doing so requires a lot of work.
No, I’m saying that when people run into strange bugs, sometimes they put together an issue (like the person behind cve-rs), and sometimes they quietly work around it because they’re busy.
Seeing as I don’t often trawl through issues on the language’s git repo, neither case really involves notifying me specifically.
My lack of an anecdote does not equate to anecdotal evidence of no issue, just that I haven’t met every Rust developer.
Yes, the problems Rust is solving are already solved under different constraints. This is not a spicy take.
The world isn’t clamoring to rewrite a Go app in Rust specifically for the memory safety they both already enjoy.
Systems applications are still almost exclusively written in C and C++, and they absolutely do run into memory bugs, all the time. I work with C almost exclusively in my day job (with shell and Rust interspersed), and while tried-and-tested C programs have far fewer memory bugs than when they were first written, that means the bugs you do find are by their nature more painful to diagnose. Eliminating a whole class of problems in-language is absolutely worth the hype.
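To make that class of problems concrete, here’s a tiny use-after-free of my own invention that compiles and runs in C; the equivalent Rust doesn’t get past the borrow checker.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	char *name = malloc(16);
	if (!name)
		return 1;
	strcpy(name, "payload");

	free(name);

	/* Undefined behaviour: the allocation is gone, but nothing stops us
	 * reading it. It may even appear to work for years before it bites. */
	printf("%s\n", name);
	return 0;
}
```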
If someone did, why would I hear of it?
The code used in cve-rs is not that complicated, and it’s not out of the realm of possibility that somebody would use lifetimes like this if they had just enough knowledge to be dangerous.
I’m as much a Rust evangelist as the next guy, but part of having excellent guard rails is loudly pointing out subtle breakages that can cause hard-to-diagnose issues.
Strangely, I’ve never had a crime issue in CS2, even with 140k+ pop and barely any cop shops.
I recently bought a 7800 XT for the same reason: NVIDIA drivers were giving me trouble in games and generally making it harder to maintain my system. Unfortunately, I ran headfirst into the kernel 6.6 reset bug, which made general usage an absolute nightmare.
The open-source drivers are still miles ahead of NVIDIA’s binary blob, if only because I could shift to 6.7 as soon as it released to fix it, but I guess GPU drivers are always going to be GPU drivers.
I’m sure the developers are competent, but the reason I care about the design decisions is the same reason a car’s electric brakes don’t interface with its infotainment system: the interface inherently creates opportunities for out-of-spec behaviour, and even if the introduced risk is tiny, the consequence is so bad that it’s worth avoiding.
If an airbag has to be controlled by software (ideally the mechanism is physical, like a pull tab), it should be an isolated real-time device whose only jobs are monitoring the accelerometer and triggering the airbag. If it’s also waiting to hear back from another device about whether your subscription has run out before it even starts checking, the risk of failure now has to account for that other device as well.
It can be done perfectly in theory, but it’s software, so of course it has bugs.
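To show the shape I’m arguing for, here’s a rough sketch; read_accel_g() and fire_squib() are hypothetical stand-ins for whatever hooks the real hardware would provide. The point is the structure: one deterministic loop with nothing in it that can block between detection and deployment.

```c
#include <stdio.h>
#include <stdbool.h>

#define CRASH_THRESHOLD_G 40.0f  /* illustrative threshold, not a real calibration */

/* Hypothetical stub: pretend the third poll sees a crash-level deceleration. */
static float read_accel_g(void)
{
	static int poll = 0;
	return (++poll >= 3) ? 60.0f : 1.0f;
}

/* Hypothetical stub: on real hardware this would be a one-shot line to the inflator. */
static void fire_squib(void)
{
	puts("squib fired");
}

int main(void)
{
	bool deployed = false;

	/* Real firmware would loop forever; the sketch stops once deployed. */
	while (!deployed) {
		if (read_accel_g() > CRASH_THRESHOLD_G) {
			fire_squib();
			deployed = true;
		}
		/* Nothing else: no logging service, no IPC, no waiting on another ECU. */
	}
	return 0;
}
```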
That information changes none of my issues; if you don’t see the plethora of potential implementation bugs involved, either you don’t code professionally or you shouldn’t be.
Yes, but also from an implementation perspective: if I’m making code that might kill somebody if it fails, I want it to be as deterministic and simple as possible. Under no circumstances do I want it:
Typically no; the top two PCIe x16 slots are normally wired directly to the CPU, though when both are populated they will both drop down to x8 connectivity.
Any PCIe x4 or x1 slots hang off the chipset, as does some of the IO and any third or fourth x16 slot.
So yes, motherboards typically do implement more IO connectivity than can be used simultaneously, though they will try to avoid disabling USB ports or dropping their speed since regular customers will not understand why.
If it’s a G502/702, they’ve got a very fucky scroll wheel & middle click; they’re actually lemons, but since nothing else works with the wireless pads, they’re the only options.