NVIDIA’s marketing overhypes, but their technical papers tend to be very solid. Obviously it always pays to remain skeptical but they have a good track record in this case.
Formerly /u/Zalack on Reddit.
Precision for what? Knowing their cron job will fire? Knowing what was wrong with the commands they sent? Neither of those is a wildly precise or ambiguous claim.
The only highly precise thing that needs to happen is the alignment of the antenna but that system has been working for decades already and has been thoroughly tested.
NASA tends to be pretty straightforward when talking about risks, and if they feel like all the systems are in working order and there’s a good chance we’ll be back in contact with it, I think it’s worth taking them at their word.
Like yeah, it’s impressive they can aim an antenna that precisely, but using stars to orient an object is a very, very well understood geometry problem. NASA has been using that technique at least as far back as Apollo.
Lol. The knee-jerk contrarianism online really gets under my skin, especially when it’s towards experts.
Like yeah, sometimes experts are wrong or systems don’t behave as expected. But framing that as some sort of erudite insight really bugs me.
“I hope the recovery system works!” doesn’t need to be rewritten as “Mmm yes. But what these engineers haven’t considered is the possibility that they are wrong”.
This is one of those things that sounds meaningful, but can be said about literally any problem in any system. Not all knowledge requires the same level of precision for confidence.
If the engineers at NASA who are familiar with the system say this is a known error state that will be fixed the next time the system designed to correct it fires on its set schedule, there’s not a whole lot added by saying sure, but what if they’re wrong?
It’s just restating the table stakes of existence.
Can’t believe Hot Fuzz hasn’t been mentioned yet.
I’m a developer and don’t hate it on its face.
IMO it’s only a problem in the context of iOS not having side-loading. I’m imagining an app that uses an API to block ads and Apple just being like “no” and then you can’t get that app.
It’s worth pointing out that reproducible builds aren’t always guaranteed if software developers aren’t specifically programming with them in mind.
Imagine a program that generates a random seed at compile time and bakes it into the binary. Each build would produce a different seed even from the same source code, so the artifact would fail a diff against the actual release.
Or maybe the developer inserts information about the build environment for debugging such as the build time and exact OS version. This would cause verification builds to differ.
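Here’s a toy sketch of both failure modes, in Python rather than a real compiler (the function names and the fake “build” step are made up for illustration): a build that bakes in a random seed produces a different artifact every time, while a build that depends only on the source is reproducible.

```python
import hashlib
import random


def nondeterministic_build(source: str) -> bytes:
    # Bakes a fresh random seed into the artifact, like a program
    # that inserts randomness at compile time.
    seed = random.getrandbits(64)
    artifact = f"{source}|seed={seed}".encode()
    return hashlib.sha256(artifact).digest()


def reproducible_build(source: str) -> bytes:
    # Depends only on the source, so identical input always yields
    # an identical artifact that diffs cleanly against a release.
    return hashlib.sha256(source.encode()).digest()


src = "fn main() {}"
# Same source, different artifacts -> verification diff fails.
assert nondeterministic_build(src) != nondeterministic_build(src)
# Same source, same artifact -> verification diff passes.
assert reproducible_build(src) == reproducible_build(src)
```

Embedding a build timestamp or OS version would break reproducibility the same way the random seed does: anything in the output that isn’t a pure function of the source makes the diff fail.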
Rust (the programming language) has a long history of working toward reproducible builds for software written in the language, for instance.
It’s one of those things that sounds straightforward and then pesky reality comes and fucks up your year.
IMO, it’s always better to try. Worst case scenario is that nothing changes, so no worse than if you didn’t. The only sane choice in that kind of situation is to pick the one with a chance for improvement.
In my experience, giving a shit about what you’re doing has a bunch of positive knock-on effects as well. You just end up feeling better about yourself. In your specific scenario it sounds like trying would also afford you the opportunity to live a happier life, and that’s worth chasing. The world is fucked, but scientists keep saying that if we act soon, it’s not so fucked that we’re past the inflection point to un-fuck it.
It’s not that strange. A timeout occurs on several servers overnight, and maybe a bunch of Lemmy instances are all run in the same timezone, so all their admins wake up around the same time and fix it.
Well it’s a timeout, so by fixing it at the same time the admins have “synchronized” when timeouts across their servers are likely to occur again since it’s tangentially related to time. They’re likely to all fail again around the same moment.
It’s kind of similar to the thundering herd, where a bunch of clients hitting errors will synchronize their retries into a giant herd and strain the server. It’s why good clients add exponential backoff AND jitter (a little bit of randomness in when the retry happens, not just every 2^x seconds). That way, if you have 1,000,000 clients that all got an error at the same moment when your server failed, it’s less likely that all of them will attempt a retry at the exact same time.
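A minimal sketch of what that looks like (function name and defaults are hypothetical, loosely following the common “full jitter” approach): the delay grows exponentially with the attempt number, gets capped, and then a uniformly random draw spreads clients out instead of having them all wake up in lockstep.

```python
import random


def backoff_with_jitter(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    # Exponential backoff: base * 2^attempt, capped so the wait
    # doesn't grow without bound after many failures.
    exp = min(cap, base * (2 ** attempt))
    # Full jitter: a uniformly random delay in [0, exp], so a million
    # clients that failed together don't all retry together.
    return random.uniform(0.0, exp)


# Each client sleeps for backoff_with_jitter(attempt) seconds before retrying;
# even at the same attempt count, their delays differ.
delays = [backoff_with_jitter(3) for _ in range(5)]
```

Without the jitter, every client at attempt 3 would wait exactly 8 seconds and the herd would arrive at the server in sync all over again.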
Edit: looked at the ticket and it’s not exactly the kind of timeout I was thinking of.
This timeout might be caused by something that’s loosely a function of time or resource usage. If it’s resource usage, then because the servers are federated, those spikes might happen across servers as everything pushes events to subscribers. So failure gets synchronized.
Or it could just be a coincidence. We as humans like to look for patterns in random events.
Agreed, I grew up in a very conservative area and was pretty homophobic when I started college.
“They can do whatever they want, just don’t ask me to like it” was an important stepping stone towards “oh shit, love is love” and finally actually listening to the experience of gay people.
I didn’t even consider the fact that the fediverse offers us the ability to start having publicly owned social media and government-run instances for direct communication.
That could be very interesting…
That’s how I wake up in the morning