Which also has the additional benefit for homeowners of local backup power in the case of a blackout :)
Which also has the additional benefit for homeowners of local backup power in the case of a blackout :)
People do use “wokeness” to describe forced/pandering diversity, but I’ve never seen that to describe just people who happen to be black or a woman in games. Sadly can agree though that it happens with gay/trans characters-
The problem is, any new diverse characters/media is immediately treated as “forced.” It happens less with race than it does sexuality/gender, since sexuality and gender are generally newer in terms of broad acceptance compared to existing acceptance for racial diversity, but it does indeed happen.
Diversity is absolutely a more left leaning idea, especially nowadays, when diversity is actively feared by the right, which is why they are almost always on the side of the racists/sexists/homophobes/transphobes/etc today and throughout history. I’m not saying those on the right can’t accept diversity, but that they often don’t.
The actual definition of wokeness isn’t diversity, but the term “woke” has just become synonymous with any left-leaning ideas (including diversity) because of how commonly people on the right continue to use it as a word to define “anything I don’t like.”
B-b-but I don’t want to actually see those people in my games! I just want them to make them! /s
Selling user data, selling ad placement, subscriptions for paid services, enterprise-grade support contracts, and the like.
They could also take an approach similar to Google, branching back out from being just a browser into a suite of related tools that Chrome can then convince users to switch to (similar to how Chrome gets users to not just use Google search, but also services like Gmail too.)
This is an order to sell, not break up.
Currently, it’s still recommended actions to the court. Nothing has actually been finalized in terms of what they’re going to actually end up trying to make Google do.
Google must not remain in control of Chrome.
While divestiture is likely, they could also spin-off, split-off, or carve-out, which carry completely different implications for Google, but are still an option if they are unable to convince the court to make Google do their original preferred choice.
A split-off could prevent Google from retaining shares in the new company without sacrificing shares in Google itself, and a carve-out could still allow them to “sell” it, but via shares sold in an IPO instead of having to get any actual buyout from another corporation.
By “sell,” they could also mean ending up having Chrome just split off from Google, as a new, independent entity that is its own company, without anybody needing to buy it in the first place.
They definitely will, since they don’t even support any of Google’s standard restore features by default.
They use Seedvault instead, which doesn’t have the capability to restore app logins. I have a feeling Seedvault may end up adding that as a feature in the future, though.
I’m excited for the future, but not as excited for the transition period.
I have similar feelings.
I discovered LLMs before the hype ever began (used GPT-2 well before ChatGPT even existed) and the same with image generation models barely before the hype really took off. (I was an early closed beta tester of DALL-E)
And as my initial fascination grew, along with the interest of my peers, the hype began to take off, and suddenly, instead of being an interesting technology with some novel use cases, it became yet another technology for companies to show to investors (after slapping it in a product in a way no user would ever enjoy) to increase stock prices.
Just as you mentioned with the dotcom bubble, I think this will definitely do a lot of good. LLMs have been great for asking specialized questions about things where I need a better explanation, or rewording/reformatting my notes, but I’ve never once felt the need to have my email client generate every email for me, as Google seems to think I’d want.
If we can just get all the over-hyped corporate garbage out, and replace it with more common-sense development, maybe we’ll actually see it being used in a way that’s beneficial for us.
I understand why people seem to think we should tolerate these views, because “muh free speech,” but to them, I say:
IPFS seems similar to what you’re looking for.
(See: A copy of Wikipedia on IPFS being censorship-resistant, and globally distributed)
I like ArchiveBox, but in my experience, it kept on running into issues saving pages, and stopped functioning after it worked the first few times. I really wish there was a more streamlined application that did a similar thing somewhere out there.
I’ve been looking at Linkwarden’s page archiving solution, but it crashes whenever I try importing any large number of links, so that’s a bust too.
That’s definitely true, I probably should have been a little more clear in my response, specifying that it can run at startup, but doesn’t always do so.
I’ll edit my comment so nobody gets the wrong idea. Thanks for pointing that out!
To put it very simply, the ‘kernel’ has significant control over your OS as it essentially runs above everything else in terms of system privileges.
It can (but not always) run at startup, so this means if you install a game with kernel-level anticheat, the moment your system turns on, the game’s publisher can have software running on your system that can restrict the installation of a particular driver, stop certain software from running, or, even insidiously spy on your system’s activity if they wished to. (and reverse-engineering the code to figure out if they are spying on you is a felony because of DRM-related laws)
It basically means trusting every single game publisher with kernel-level anticheat in their games to have a full view into your system, and the ability to effectively control it, without any legal recourse or transparency, all to try (and usually fail) to stop cheating in games.
Computers are a fundamental part of that process in modern times.
If you were taking a test to assess how much weight you could lift, and you got a robot to lift 2,000 lbs for you, saying you should pass for lifting 2000 lbs would be stupid. The argument wouldn’t make sense. Why? Because the same exact logic applies. The test is to assess you, not the machine.
Just because computers exist, can do things, and are available to you, doesn’t mean that anything to assess your capabilities can now just assess the best available technology instead of you.
Like spell check? Or grammar check?
Spell/Grammar check doesn’t generate large parts of a paper, it refines what you already wrote, by simply rephrasing or fixing typos. If I write a paragraph of text and run it through spell & grammar check, the most you’d get is a paper without spelling errors, and maybe a couple different phrases used to link some words together.
If I asked an LLM to write a paragraph of text about a particular topic, even if I gave it some references of what I knew, I’d likely get a paper written entirely differently from my original mental picture of it, that might include more or less information than I’d intended, with different turns of phrase than I’d use, and no cohesion with whatever I might generate later in a different session with the LLM.
These are not even remotely comparable.
Assuming the point is how well someone conveys information, then wouldn’t many people better be better at conveying info by using machines as much as reasonable? Why should they be punished for this? Or forced to pretend that they’re not using machines their whole lives?
This is an interesting question, but I think it mistakes a replacement for a tool on a fundamental level.
I use LLMs from time to time to better explain a concept to myself, or to get ideas for how to rephrase some text I’m writing. But if I used the LLM all the time, for all my work, then me being there is sort of pointless.
Because, the thing is, most LLMs aren’t used in a way that conveys info you already know. They primarily operate by simply regurgitating existing information (rather, associations between words) within their model weights. You don’t easily draw out any new insights, perspectives, or content, from something that doesn’t have the capability to do so.
On top of that, let’s use a simple analogy. Let’s say I’m in charge of calculating the math required for a rocket launch. I designate all the work to an automated calculator, which does all the work for me. I don’t know math, since I’ve used a calculator for all math all my life, but the calculator should know.
I am incapable of ever checking, proofreading, or even conceptualizing the output.
If asked about the calculations, I can provide no answer. If they don’t work out, I have no clue why. And if I ever want to compute something more complicated than the calculator can, I can’t, because I don’t even know what the calculator does. I have to then learn everything it knows, before I can exceed its capabilities.
We’ve always used technology to augment human capabilities, but replacing them often just means we can’t progress as easily in the long-term.
Short-term, sure, these papers could be written and replaced by an LLM. Long-term, nobody knows how to write papers. If nobody knows how to properly convey information, where does an LLM get its training data on modern information? How do you properly explain to it what you want? How do you proofread the output?
If you entirely replace human work with that of a machine, you also lose the ability to truly understand, check, and build upon the very thing that replaced you.
Schools are not about education but about privilege, filtering, indoctrination, control, etc.
Many people attending school, primarily higher education like college, are privileged because education costs money, and those with more money are often more privileged. That does not mean school itself is about privilege, it means people with privilege can afford to attend it more easily. Of course, grants, scholarships, and savings still exist, and help many people afford education.
“Filtering” doesn’t exactly provide enough context to make sense in this argument.
Indoctrination, if we go by the definition that defines it as teaching someone to accept a doctrine uncritically, is the opposite of what most educational institutions teach. If you understood how much effort goes into teaching critical thought as a skill to be used within and outside of education, you’d likely see how this doesn’t make much sense. Furthermore, the heavily diverse range of beliefs, people, and viewpoints on campuses often provides a more well-rounded, diverse understanding of the world, and of the people’s views within it, than a non-educational background can.
“Control” is just another fearmongering word. What control, exactly? How is it being applied?
Maybe if a “teacher” has to trick their students in order to enforce pointless manual labor, then it’s not worth doing.
They’re not tricking students, they’re tricking LLMs that students are using to get out of doing the work required of them to get a degree. The entire point of a degree is to signify that you understand the skills and topics required for a particular field. If you don’t want to actually get the knowledge signified by the degree, then you can put “I use ChatGPT and it does just as good” on your resume, and see if employers value that the same.
Maybe if homework can be done by statistics, then it’s not worth doing.
All math homework can be done by a calculator. All the writing courses I did throughout elementary and middle school would have likely graded me higher if I’d used a modern LLM. All the history assignment’s questions could have been answered with access to Wikipedia.
But if I’d done that, I wouldn’t know math, I would know no history, and I wouldn’t be able to properly write any long-form content.
Even when technology exists that can replace functions the human brain can do, we don’t just sacrifice all attempts to use the knowledge ourselves because this machine can do it better, because without that, we would be limiting our future potential.
This sounds fake. It seems like only the most careless students wouldn’t notice this “hidden” prompt or the quote from the dog.
The prompt is likely colored the same as the page to make it visually invisible to the human eye upon first inspection.
And I’m sorry to say, but often times, the students who are the most careless, unwilling to even check work, and simply incapable of doing work themselves, are usually the same ones who use ChatGPT, and don’t even proofread the output.
Just like how the moment their videotape rental history was exposed, that was when privacy became an absolute must in the case of video rental services.
I completely get your point, and to an extent I agree, but I do think there’s still an argument to be made.
For instance, if a theme park was charging an ungodly amount for admission, or maybe, say, charged you on a per-ride basis after you paid admission, slowly adding more and more charges to every activity until half your time was spent just handing over the money to do things, if everyone were to stop going in, the theme park would close down, because they did something that turned users away.
Websites have continually added more and more ads, to the point that reading a news article feels like reading 50% ads, and 50% content. If they never see any pushback, then they’ll just keep heaping on more and more ads until it’s physically impossible to cram any more in.
I feel like this is less of a dunk on the site by not using it in that moment, and more a justifiable way to show that you won’t tolerate the rapidly enshittified landscape of digital advertising, and so these sites will never even have a chance of getting your business in the future.
If something like this happens enough, advertisers might start finding alternative ways to fund their content, (i.e. donation model, subscription, limited free articles then paywall) or ad networks might actually engage with user demands and make their systems less intrusive, or more private. (which can be seen to some degree with, for instance, Mozilla’s acquisition of Anonym)
Even citing Google’s own research, 63% of users use ad blockers because of too many ads, and 48% use it because of annoying ads. The majority of these sites that instantly hit you with a block are often using highly intrusive ads that keep popping up, getting in the way, and taking up way too much space. The exact thing we know makes users not want to come back. It’s their fault users don’t want to see their deliberately maliciously placed ads.
A lot of users (myself most definitely included) use ad blockers primarily for privacy reasons. Ad networks bundle massive amounts of surveillance technology with their ads, which isn’t just intrusive, but it also slows down every single site you go to, across the entire internet. Refusing that practice increases the chance that sites more broadly could shift to more privacy-focused advertising methods.
Google recommends to “Treat your visitors with respect,” but these sites that just instantly slap up an ad blocker removal request before you’ve even seen the content don’t actually respect you, they just hope you’re willing to sacrifice your privacy, and overwhelm yourself with ads, to see content you don’t even know anything about yet. Why should I watch your ads and give up my privacy if you haven’t given me good reason to even care about your content yet?
This is why sites with soft paywalls, those that say you have “x number of free articles remaining,” or those that say “you’ve read x articles this month, would you consider supporting us?” get a higher rate of users disabling adblockers or paying than those that just slap these popups in your face the moment you open the site.
Just as someone already mentioned in this thread, I can vouch for Immich as well. I self host it (currently via Umbrel on a Pi 5 purely for simplicity) and the duplicate detection feature is very handy.
Oh, and the AI face detection feature is great for finding all the photos you have of a given person, but it sometimes screws up and thinks the same person is two different people, but it allows you to merge them anyways, so it’s fine.
The interface is great, there’s no paywalled features (although they do have a “license,” which is purely a donation) and it generally feels pretty slick.
I would warn to anyone considering trying it that it is still in heavy development, and that means it could break and lose all your photos. Keep backups, 3-2-1 rule, all that jazz.