Those young machine spirits need their rest
One upvote is not enough.
I once wrote a commit message the length of a full blog post comparing 10 different alternatives for a micro-optimization, with benchmarks and more. The diff itself was ten lines. It shaved around 4% off the hot path (based on a sampling profiler that ran over the weekend).
Ew no.
Abusing language features like this (boolean short-circuit evaluation) just makes it harder for the next person who has to maintain your code.
The function does have room for improvement: check one thing at a time. That flattens the ifs and turns them into proper guard clauses. It also opens the door to encapsulating their logic and refactoring this function into a proper validator that can return all the reasons a user is invalid.
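To make that concrete, here's a rough sketch of both shapes (C, with hypothetical name/email/age fields - the original function wasn't posted, so this is purely illustrative):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical user record, purely for illustration. */
struct user {
    const char *name;
    const char *email;
    int age;
};

/* Guard-clause style: check one thing at a time and bail out early,
 * instead of nesting ifs. */
static bool is_valid_user(const struct user *u)
{
    if (u->name == NULL || u->name[0] == '\0')
        return false;
    if (u->email == NULL || strchr(u->email, '@') == NULL)
        return false;
    if (u->age < 18)
        return false;
    return true;
}

/* Validator style: the same checks, but collect every reason the
 * user is invalid instead of stopping at the first one. */
static size_t invalid_reasons(const struct user *u,
                              const char **reasons, size_t max)
{
    size_t n = 0;
    if (n < max && (u->name == NULL || u->name[0] == '\0'))
        reasons[n++] = "name is empty";
    if (n < max && (u->email == NULL || strchr(u->email, '@') == NULL))
        reasons[n++] = "email is malformed";
    if (n < max && u->age < 18)
        reasons[n++] = "user is under 18";
    return n;
}

int main(void)
{
    struct user u = { "", "not-an-email", 16 };
    const char *reasons[8];
    size_t n = invalid_reasons(&u, reasons, 8);
    for (size_t i = 0; i < n; i++)
        printf("invalid: %s\n", reasons[i]);
    return is_valid_user(&u) ? 0 : 1;
}
```

Same checks either way - the validator version just reports everything at once, which is usually what the caller actually wants.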
Good code is not “elegant” code. It’s code that is simple and unsurprising and can be easily understood by a hungover fresh graduate new hire.
Yes. I’ll read the content, but I try to avoid interacting.
Mind you, db0 himself is a tankie, although he doesn’t seem to insist on imposing that on the users or communities on his instance.
EDIT: I stand corrected. Apologies to db0 for lumping him in with that crowd.
Gotcha. So all horses are purple?
Most AIs are trained on older poster art like this - they’re well labelled, have consistent style, and because they’re older there are likely to be a bunch of duplicates in the training set.
Pretty sure this one predates AI art.
It’s from 1986
None built in from what I recall. That was from back in 2011, so it’s possible things changed since.
Reading through, it looks like retries do exist, but remember that duplicate packets are treated as a loss signal that shrinks the window, so it’s possible that the transmission succeeded but the ack was lost.
I remember the project demos from the course though - one team implemented some form of fast retry on two laptops and had one guy walk out and away. With regular wifi he didn’t even make it to the end of the hall before the video dropped out. With their custom stack he made it out of the building before it went.
I’ll need to dig through to find the name of what they did.
To be fair, because of window size management it only takes 1% packet loss to cause a catastrophic drop in speed.
Packet loss in TCP is only ever interpreted as a signal of network congestion. TCP was never intended to run over a lossy link like wifi.
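To put numbers on that, the usual back-of-the-envelope model is the Mathis et al. formula, which caps steady-state throughput at roughly (MSS/RTT) * C/sqrt(p). Quick sketch - the MSS, RTT, and loss rates here are illustrative, not measurements:

```c
#include <math.h>
#include <stdio.h>

/* Mathis et al. approximation for Reno-style TCP:
 *   throughput <= (MSS / RTT) * (C / sqrt(p)),  C ~ sqrt(3/2)
 * All inputs below are illustrative, not measured. */
int main(void)
{
    const double mss = 1460.0;                    /* bytes */
    const double rtt = 0.05;                      /* 50 ms in seconds */
    const double c   = sqrt(1.5);
    const double losses[] = { 0.0001, 0.001, 0.01 };

    for (int i = 0; i < 3; i++) {
        double p    = losses[i];
        double mbps = (mss / rtt) * (c / sqrt(p)) * 8.0 / 1e6;
        printf("loss %5.2f%% -> ceiling ~%6.1f Mbit/s\n", p * 100.0, mbps);
    }
    return 0;
}
```

At a 50 ms RTT that works out to roughly 29 Mbit/s at 0.01% loss but only about 3 Mbit/s at 1% - the ceiling falls off as 1/sqrt(p) no matter how fast the underlying link is.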
Only on signup
Anything using Blind as a “verified industry source” is going to be skewed toward the type of person who uses Blind. Beyond that, it’s a low sample size, and there are suspiciously round fractions for some of the larger companies. Worse, because Blind is blind - this doesn’t represent current employees, merely people who worked at those companies at some point in the past.
Not saying it’s not good - just saying not to get overly excited over a badly done survey
That makes a lot of sense - I wonder if they also do the SIGSEGV trick like HotSpot to know when they need to JIT the next chunk of instructions
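For anyone curious, the guard-page flavor of that trick looks roughly like this: protect a region, catch the first fault into it, lazily fill it in, and resume. Linux-flavored sketch - a real JIT would emit translated machine code where this just writes data, and strictly speaking mprotect inside a SIGSEGV handler isn't async-signal-safe, even though it's the conventional move:

```c
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static unsigned char *region;
static size_t page;

static void on_segv(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    unsigned char *addr = info->si_addr;
    if (addr < region || addr >= region + page)
        _exit(1);  /* a genuine crash, not our guard page */

    /* "Compile" the page on first touch; a real translator would
     * generate the next chunk of code here. */
    mprotect(region, page, PROT_READ | PROT_WRITE);
    memset(region, 0x42, page);
    /* Returning restarts the faulting instruction, which now succeeds. */
}

int main(void)
{
    page   = (size_t)sysconf(_SC_PAGESIZE);
    region = mmap(NULL, page, PROT_NONE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa = { 0 };
    sa.sa_sigaction = on_segv;
    sa.sa_flags     = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    /* First access faults, gets patched in the handler, then succeeds. */
    printf("first touch: 0x%02x\n", region[0]);
    return 0;
}
```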
But does it run Doom? Using CMOV instructions only?
I thought FAT binaries don’t work like that - they include multiple instruction sets with a header pointing to the sections (68k, PPC, and x86)
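At least for the Mach-O flavor (the PPC/x86/ARM era), that's exactly the layout: a small big-endian table up front pointing at per-architecture slices. (The classic 68k-era fat binaries worked differently, via the resource fork, as I understand it.) The structs here mirror <mach-o/fat.h>; the reader itself is just a sketch:

```c
#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>  /* ntohl - fat headers are stored big-endian */

/* On-disk layout of a Mach-O universal ("fat") binary header,
 * mirroring <mach-o/fat.h>. */
struct fat_header {
    uint32_t magic;      /* 0xCAFEBABE */
    uint32_t nfat_arch;  /* number of slices that follow */
};

struct fat_arch {
    uint32_t cputype;    /* e.g. x86_64, arm64 */
    uint32_t cpusubtype;
    uint32_t offset;     /* file offset of this slice */
    uint32_t size;       /* slice size in bytes */
    uint32_t align;      /* alignment, as a power of two */
};

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;

    struct fat_header fh;
    if (fread(&fh, sizeof fh, 1, f) != 1) return 1;
    if (ntohl(fh.magic) != 0xCAFEBABEu) {
        puts("not a fat binary");
        return 0;
    }

    uint32_t n = ntohl(fh.nfat_arch);
    for (uint32_t i = 0; i < n; i++) {
        struct fat_arch fa;
        if (fread(&fa, sizeof fa, 1, f) != 1) break;
        printf("slice %u: cputype=%u offset=%u size=%u\n",
               i, ntohl(fa.cputype), ntohl(fa.offset), ntohl(fa.size));
    }
    fclose(f);
    return 0;
}
```

The loader just picks the slice matching the current CPU and jumps to that offset.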
Rosetta to the best of my understanding did something similar - but relied on some custom microcode support that isn’t rooted in ARM instructions. Do you have a link that explains a bit more in depth on how they did that?
From what I’ve understood of this - it’s transpiling the x86 code to ARM on the fly. I honestly would have thought that wasn’t possible, but hearing that they’re doing it - it will be a monumental effort, but it’s feasible. The best part is that once they’ve gotten the C runtime and the cdecl calling convention working, actual application support won’t be far behind. The biggest challenge will likely be inserting memory barriers correctly - a spinlock implemented in x86 assembly is highly unlikely to work correctly without a lot of effort to recognize and transpile that specific structure as a whole.
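To illustrate the spinlock point - the same lock written against the C11 memory model makes the ordering explicit. On x86, TSO means the acquire/release semantics come for free with xchg and a plain store; on ARM they have to be spelled out, which is exactly the information a naive instruction-by-instruction translation loses. Sketch of the idea, not Rosetta's actual machinery:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* A spinlock against the C11 memory model. On x86 the orderings are
 * nearly free (the hardware is TSO); on ARM the acquire maps to
 * ldaxr/stlxr-style instructions and the release to stlr. A binary
 * translator going x86 -> ARM must reintroduce this ordering, because
 * the x86 instructions never spelled it out. */
typedef struct { atomic_bool locked; } spinlock;

static void spin_lock(spinlock *l)
{
    /* x86 `xchg` is implicitly a full barrier; here the acquire
     * ordering has to be stated explicitly. */
    while (atomic_exchange_explicit(&l->locked, true,
                                    memory_order_acquire))
        ;  /* spin until the holder releases */
}

static void spin_unlock(spinlock *l)
{
    /* A plain store suffices on x86 (TSO); ARM needs a release store. */
    atomic_store_explicit(&l->locked, false, memory_order_release);
}

static spinlock lock = { false };
static long counter;

static void *work(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        spin_lock(&lock);
        counter++;
        spin_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, work, NULL);
    pthread_create(&b, NULL, work, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expect 200000)\n", counter);
    return 0;
}
```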
I have worked remotely on and off for years. Having a physical separation between the space where I work and the space where I play is an absolute must.
Beyond that - the hardware needs for development and gaming are wildly different. If you want something that’s going to be good at both, you’re either going to have to spend a lot of money or compromise heavily on quality.
I’d strongly recommend against it. Nothing to do with specs or viability, but psychologically you’ll want to play games - they’re enjoyable. You can work around that in a few ways: only use the keyboard/mouse for dev work, only play games outside the workroom, etc. It will still take a lot of self-discipline, but it’s nothing compared to having a different OS, physical machine, etc.
In terms of specs - if it can run VS Code, you can use the remote development plugins to run things on a beefier computer if you do heavy data work, etc. I don’t know if it will handle video editing though.
Seriously though - JIRA isn’t inherently a massive pain in the ass. It’s the way it’s used that sucks: workflow restrictions so devs can’t move tickets from testing back to in progress, dozens of mandatory fields, etc.
When your tools start dictating your workflow rather than the other way around then it’s time to switch tools.
You think it’s out of hand now?
Just wait until 9am Moscow Standard Time on Monday morning. It’ll take a little bit of time for them to drink their coffee and have the morning meeting to figure out their talking points. The smarter ones will wait until 9am Eastern before they start posting.