Eh, not really then. If you have some behavior in those 50 copy/pastes that needs to be deleted, you’ve got to delete it 50 times. That’s not easier at all.
I’m using the radar network for dispatch and priority for tie breaking/to make sure the resources are distributed evenly.
All my loading stations are simply called “Cargo Pickup” and all of my cargo trains go to any of them with an opening. Once there, the station reports on the red wire the ID of the train in the channel corresponding to the item being loaded (unless another train is already being reported by another station with the same items).
On the demand side, stations look for the ID on the item they need. They copy the ID into the green network on the channel corresponding to their station name. In the simple case, a station serving copper ore to copper smelters copies the train ID from copper on the red network to copper on the green network. But stations can also request multiple ingredients in which case they have some other symbol in their name besides copper ore. (Of course, here too the copying only happens if no other station is requesting a train on that same channel).
Back on the supply side, the station looks through all the IDs on the green network and sends the ones that match the waiting train to the train. The train uses the symbols to activate an interrupt to go to the corresponding station to deliver the goods.
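Roughly, the per-tick logic each station runs looks like this (a loose sketch in Python just to show the flow; the dict-as-wire modeling, signal names, and train ID are made up for illustration, not actual combinator settings):

```python
# Illustrative sketch of the dispatch protocol (not in-game combinator syntax).
# red / green are the two circuit networks, modeled as {signal: value} dicts.

red = {}    # supply side: item signal -> ID of a train loaded with that item
green = {}  # demand side: station-name signal -> ID of the train being requested

def supply_announce(red, item, waiting_train_id):
    """Loading station: report the waiting train's ID on the item's channel,
    unless another station is already reporting a train for that item."""
    if red.get(item, 0) == 0:
        red[item] = waiting_train_id

def demand_request(red, green, needed_item, station_signal):
    """Requester station: copy the train ID from the item channel (red)
    onto the channel named after this station (green), if that channel is free."""
    train_id = red.get(needed_item, 0)
    if train_id != 0 and green.get(station_signal, 0) == 0:
        green[station_signal] = train_id

def supply_dispatch(green, waiting_train_id):
    """Loading station: collect every green channel naming our waiting train
    and send those signals to it; the train's interrupt then targets the
    station whose signal it received."""
    return [signal for signal, tid in green.items() if tid == waiting_train_id]

# Example tick: train 7 sits at a pickup loaded with copper ore,
# and the "copper-smelter" station wants copper ore.
supply_announce(red, "copper-ore", 7)
demand_request(red, green, "copper-ore", "copper-smelter")
print(supply_dispatch(green, 7))  # -> ["copper-smelter"]
```

The “only if the channel is free” checks are the part that isn’t atomic, which is the hiccup I mention below.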
I just set this up today. I haven’t perfected it yet. One minor hiccup is handling the fact that you have no way to atomically access a channel. So two stations could request on the same channel at the same time, corrupting the ID. But that only happens if the stations are activated to make a demand on the exact same tick. It’s not so much that it’s a constant problem, it just bothers me that it could be.
You can also set each train stop’s priority via circuit. I’ve been setting it based on how badly a train is needed at that station.
Because even if it winds up being a bad study, it still evokes a deeper, more important “truth.”
I’m being sarcastic but that’s actually what’s going on here.
No mention of Gemini in their blog post on SGE. And their AI principles doc says:
We acknowledge that large language models (LLMs) like those that power generative AI in Search have the potential to generate responses that seem to reflect opinions or emotions, since they have been trained on language that people use to reflect the human experience. We intentionally trained the models that power SGE to refrain from reflecting a persona. It is not designed to respond in the first person, for example, and we fine-tuned the model to provide objective, neutral responses that are corroborated with web results.
So a custom model.
When you use (read, view, listen to…) copyrighted material you’re subject to the licensing rules, no matter if it’s free (as in beer) or not.
You’ve got that backwards. Copyright protects the owner’s right to distribution. Reading, viewing, listening to a work is never copyright infringement. Which is to say that making it publicly available is the owner exercising their rights.
This means that quoting more than what’s considered fair use is a violation of the license, for instance. In practice a human would not be able to quote a 1,000-word document exactly just on the first read, but “AI” can, thus infringing one of the licensing clauses.
Only in very specific circumstances, with some particular coaxing, can you get an AI to do this, and only with certain works that are widely quoted throughout its training data. There may be some very small-scale copyright violations that occur here, but it’s largely a technical hurdle that will be overcome before long (i.e. wholesale regurgitation isn’t an actual goal of AI technology).
Some licensing on copyrighted material also explicitly forbids use of the full content by automated systems (these used to be web crawlers for search engines).
Again, copyright doesn’t govern how you’re allowed to view a work. robots.txt is not a legally enforceable license. At best, the website owner may be able to restrict access via computer access abuse laws, but not copyright. And it would be completely irrelevant to the question of whether or not AI can train on non-internet data sets like books, movies, etc.
It wasn’t Gemini, but the AI-generated suggestions added to the top of Google search. But that AI was specifically trained to regurgitate and reference directly from websites, in an effort to minimize the amount of hallucinated answers.
Point is that accessing a website with an adblocker has never been considered a copyright violation.
a much stronger one would be to simply note all of the works with a Creative Commons “No Derivatives” license in the training data, since it is hard to argue that the model checkpoint isn’t derived from the training data.
Not really. First of all, Creative Commons strictly loosens the copyright restrictions on a work. The strongest license is actually no explicit license, i.e. “All Rights Reserved.” No Derivatives is already included under full, default copyright.
Second, derivative has a pretty strict legal definition. It’s not enough to say that the derived work was created using a protected work, or even that the derived work couldn’t exist without the protected work. Some examples: create a word cloud of your favorite book, analyze the tone of a news article to help you trade stocks, produce an image containing the most prominent color in every frame of a movie, or create a search index of the words found on all websites on the internet. All of that is absolutely allowed under even the strictest of copyright protections.
Statistical analysis of copyrighted materials, as in training AI, easily clears that same bar.
We’re not just doing this for the money.
We’re doing it for a shitload of money!
They do, though. They purchase data sets from people with licenses, use open source data sets, and/or scrape publicly available data themselves. Worst case they could download pirated data sets, but that’s copyright infringement committed by the entity distributing the data without the legal authority.
Beyond that, copyright doesn’t protect the work from being used to create something else, as long as you’re not distributing significant portions of it. Movie and book reviewers won that legal battle long ago.
The examples they provided were for very widely distributed stories (i.e. present in the data set many times over). The prompts they used were not provided. How many times they had to prompt was not provided. Their results are very difficult to reproduce, if not impossible, especially on newer models.
I mean, sure, it happens. But it’s not a generalizable problem. You’re not going to get it to regurgitate your Lemmy comment, even if they’ve trained on it. You can’t just go and ask it to write Harry Potter and the Goblet of Fire for you. It’s not the intended purpose of this technology. I expect it’ll largely be a solved problem in 5-10 years, if not sooner.
I mean you do. All the time. We all do. You’re allowed to use them, you’re just not allowed to copy them. It’s in the name, you know: copy right.
I know. But we’re both talking about the same thing. Everyone gets irrelevant and ostensibly novel ads all the time. Cat litter, beauty products, diapers, whatever. They just so happen to have focused their attention on cat litter when they just as easily could have focused on dozens of other products and noticed the same result. And, in truth, it’s unlikely that they are actually novel, just unnoticed before.
I’d be incredibly skeptical of the claim that they’ve never been served a cat litter ad. Everybody gets served ads that are misses. They’re obviously easy to ignore which makes it difficult to recall what they were about. But I have no doubt that they would’ve been served cat-related ads plenty of times before. Cats are, after all, one of the most common pets.
Huh? League is bigger than Dota 2. By like a lot. Because League has much more mass appeal than Dota. And that was their point: it has a dedicated but niche fanbase.
Hard disagree. There are plenty of games that are little more than dressed-up choose-your-own-adventure stories. Plenty that are meant for chill and relaxing gameplay. Plenty that do little more than guide you through horror scenes. And so on.
And even beyond that, most people don’t even play a game long enough to have any real “skill development over time.” I recently read the Civ7 director saying that if you’ve ever finished a game of Civ, you’re literally in a minority of the player base. And that tracks with what I’ve heard about other games as well.
Most players of any given game never finish it. Most of those quit at the first sign of frustration and most are on the easiest game difficulties. This would indicate to me that the majority’s conception of “fun” has little to no relation to skill development in the game. They’re there for the moment to moment experiences. Rubber band mechanics are there to evoke those fun experiences more often in the majority of the player base.
The thing is I don’t think it has anything to offer to bring in people from outside the genre. Some people really enjoy it but you kinda have to already be into that kind of thing (DOTA).
I think you’re overstating the importance of games as a platform for skill development as opposed to a platform for, you know, having fun. The fact is that the vast majority of players play any game on one of its lowest difficulty settings.
Rubber banding is made for the core of the game’s audience, and the group of challenge-seekers just isn’t large enough to be that core. Some of those rubber banding mechanics can be and are disabled at higher difficulty settings. Others are needed at higher difficulty because the AI can’t compete, and the investment in dev time to improve the AI just isn’t worth it because, again, very few people actually play the game at those difficulties.
I’ve got something similar except the number of items I want to produce is set by the constant combinator. The new logistics groups are awesome for this because I can have the combinator synced with things I want in my inventory.
I also have the radar network set up where trains report the items they have and stations request trains with items they want so I can request more materials be delivered to my omni-assembler when I need them.
The downside I’ve been wanting to fix is the need to specify all of the intermediates that are needed. That’s not too hard to fix, of course, just attempt to make ingredients that are missing (like you’re doing).
I’ve also been wanting to try and change from using a constant combinator to using the requests on the logistics network. So then all you’d have to do to get something added to the recipe list is start requesting it.
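To make that concrete, here’s the kind of logic I mean, sketched in Python with made-up recipe data (the real thing would be combinators reading logistic requests and crafting signals, not code): start from whatever is requested and recursively add any craftable ingredient that’s missing.

```python
# Illustrative only: toy recipe data, not real Factorio recipe numbers.
RECIPES = {
    "electronic-circuit": {"iron-plate": 1, "copper-cable": 3},
    "copper-cable": {"copper-plate": 1},
}

def expand_requests(requests, stock):
    """Start from the requested items (e.g. logistic-network requests) and
    recursively add any craftable ingredient we don't have enough of."""
    to_make = dict(requests)
    queue = list(requests.items())
    while queue:
        item, count = queue.pop()
        for ingredient, per_craft in RECIPES.get(item, {}).items():
            shortfall = per_craft * count - stock.get(ingredient, 0)
            if shortfall > 0 and ingredient in RECIPES:  # raw materials are skipped
                to_make[ingredient] = to_make.get(ingredient, 0) + shortfall
                queue.append((ingredient, shortfall))
    return to_make

# Requesting 10 circuits with empty stock also queues the 30 cables they need.
print(expand_requests({"electronic-circuit": 10}, stock={}))
# -> {'electronic-circuit': 10, 'copper-cable': 30}
```

With the logistic-network version, the “requests” input just becomes whatever I’m currently requesting in my inventory, so adding something to the recipe list really is just starting to request it.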