Because electric cars were a relatively new concept that needed to be designed and prototyped. That’s a job done by engineers. Factory workers don’t really come in until mass production, after the engineering is done.
Wow, do this many people really not get the reference? I thought it was funny
Someone already suggested bringing it to the cops earlier in this thread
You’re misinterpreting what I said and conflating two separate scenarios in your second statement. I didn’t say anything about the system warning “for a few seconds before shutting down” in the event of an imminent collision. It warns the driver before shutting down if the driver fails to hold the steering wheel during normal driving conditions.
The warnings were worthless because the driver kept responding to them just before they timed out and shut autopilot down. It would be even worse if the car immediately pulled off the road and stopped in traffic without warning the driver first.
They aren’t subtle either: if the driver fails to touch the wheel for about 5-10 seconds, the car starts beeping loudly and flashing an icon on the screen.
This is not a case of autopilot causing an accident; it’s a case of an impaired driver operating a vehicle when they should not have been. If the driver had been using standard cruise control, would we blame the vehicle because their foot wasn’t touching the accelerator when the accident happened? No, we wouldn’t.
I have to say this is extremely inaccurate imo. Self-driving takes over the menial tasks of keeping the car in the lane, watching the speed, etc., and lets an attentive driver focus on higher-level tasks like looking at the road ahead, watching the sides of the road for potential hazards, and keeping more aware of their blind spots.
Just because the feature can be abused does not inherently make it unsafe. A drunk driver can use cruise control to more accurately control the vehicle’s speed and avoid a ticket, does that make it a bad feature? I wouldn’t say so.
Autopilot and other driver assist systems are good when used responsibly and cautiously. It’s frustrating to see people cause an accident after misusing the system and blame the technology instead. This is why we can’t have nice things.
I’m not even replying to the article or the original commenter. I’m replying to the person that said “why doesn’t the car slow down and stop when the warnings are ignored?” which is precisely what it does.
I’m far from a Tesla fanboy, and there is no shortage of valid criticisms against Tesla. However, misrepresenting what autopilot does in the event of a forced disengagement isn’t right either.
This is literally exactly how it works already. The driver must have been pulling on the steering wheel right before it gave him a strike. The system will warn you to pay attention for a few seconds before shutting down. Here’s a video: https://youtu.be/oBIKikBmdN8
This is what it does already: https://youtu.be/oBIKikBmdN8
I’ve always mounted network shares in fstab; what’s the benefit of doing it with systemd?
(Also, for those of you learning, this method only works on systemd-based distros)
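For anyone comparing the two approaches, here’s a rough sketch of what the systemd-native version looks like; the server address, export path, and mount point are just placeholders:

```
# /etc/systemd/system/mnt-media.mount
# (the unit file name must match the mount path: /mnt/media -> mnt-media.mount)
[Unit]
Description=Example NFS share (placeholder server and paths)
After=network-online.target
Wants=network-online.target

[Mount]
What=192.168.1.10:/export/media
Where=/mnt/media
Type=nfs
Options=noatime

[Install]
WantedBy=multi-user.target
```

You’d then run `systemctl enable --now mnt-media.mount`. The usual selling points are explicit ordering (it won’t race the network at boot) and optional on-demand mounting via a matching .automount unit; you can get most of that from fstab too by adding `noauto,x-systemd.automount` to the options field, since systemd generates mount units from fstab anyway.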
Or, as I’ve discovered recently while troubleshooting local infrastructure, the ARP table. Essentially DNS, but for resolving IP addresses to MAC addresses.
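If anyone wants to poke at it themselves, on Linux the kernel’s neighbor cache (the ARP table) is easy to dump; the interface name below is just an example:

```
# Modern iproute2 command: show the neighbor (ARP) cache
ip neigh show

# Limit it to a single interface (interface name is an example)
ip neigh show dev eth0

# Older net-tools equivalent, if arp is installed
arp -a
```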
This is fucking stupid imo. We have more than enough land area for solar as it is. Why would you add 100x the complexity to your solar plant when you can just build it on land? Now you have to deal with tides and salt water corrosion, your technicians have to be scuba divers or something, and running transmission lines through salt water is much harder than running them over land. What happens when there’s an electrical fault that kills a bunch of people because they’re submerged in highly conductive salt water?
People are acting like ChatGPT is storing the entire Harry Potter series in its neural net somewhere. It’s not storing or reproducing text in a 1:1 manner from the original material. Certain material, like very popular books, has likely been seen tens of thousands of times during training because of how many times it was reposted online (and therefore how many times it appeared in the training data).
Just because it can recite certain passages almost perfectly doesn’t mean it’s redistributing copyrighted books. How many quotes do you know perfectly from books you’ve read before? I would guess quite a few. LLMs are doing the same thing, but on mega steroids with a nearly limitless capacity for information retention.