So you’re not actually arguing that the IDF is not committing massive war crimes, you’re just saying you don’t care?
Are you claiming that all of Gaza is Hamas?
Are you referring to the Method of Loci? I’ve experimented with it a bit. For a while I would do daily mental walk-throughs of the apartment I grew up in, and I practiced visualizing symbols for the 10 digits. After a few months I was able to successfully remember some pretty long numbers. Ironically, I don’t remember how long they were. It wasn’t that useful though. It took me a really long time to “store” numbers; longer than it would take to just write them down. I didn’t have a system for storing anything besides digits. Worst of all, the “memory space” was limited to the size of my old apartment. I was able to increase the space by adding detail to rooms but it was never enough to be practical for anything besides trivia. Strangely, the repeated “walk-throughs” ended up bringing back memories of smells and textures that I hadn’t thought about in decades.
I think I’m much better at remembering and imagining things that can be easily articulated. I recognize my wife with no problem but I can’t really summon a good mental image of her. We have a photo of the night we met. I can visualize details of the clothing and jewelry she was wearing, but when I “look” at the image in my mind I can’t really see her face. It’s hard to describe. Almost like there’s an image with a tag that says “link to wife’s face here” without actually loading it. When I really concentrate on it I can either get a really blurry image of her face, a really zoomed-in image, or a sort of “line art” version of her face. I don’t have real prosopagnosia. I can recognize faces, it just takes many more exposures than it does for most people.
I used to do a lot of visualizing meditation. I could get myself to the point where I could imagine a different room altogether (for meditation it was always the same fantasy “place”, so that made it easier). When I was really into it I could change the perceived orientation of gravity. That is, when I was lying in bed I could sometimes complete the hallucination that I was standing in that “room”. That typically lasted only a few seconds but it was pretty wild.
This (and the human brain in general) is fascinating to me. I’ve always been on the opposite end of the spectrum from aphantasia, although I’ve never been officially diagnosed with hyperphantasia. I don’t understand it at all; it just seems natural.
When there’s a question about physical objects I close my eyes and just check. It’s not that my memory is particularly good but I can “synthesize” shapes. I might tell myself a story like, “Start with a point. Expand it into a line segment. Now pull that line parallel to itself to create a rectangle. You can spin that plane around a bit and then grab a point in the middle and pull it up into a pyramid.” And so on. I basically watch a color-coded animation when I say something like that.
With music it can be a bit distracting. I’ll go through phases where I get some piece of music stuck in my head and when I do it’s incredibly detailed. I can pick out individual instruments in an orchestra and hear reverb. It can actually get so distracting that I have to play a trick to get it to stop. I need to find a piece of interesting music that I’ve never heard before. I can play that enough times to “drive out” the other one but not enough to “light up” the new one and I’m fine.
As a kid it was obvious that this was not something everyone did and I thought I was special. It turns out that beyond being an interesting curiosity I haven’t found any actual use for it. Too bad. I still find these differences really interesting.
As an aside, I’m also one of those people that’s terrible at remembering names and faces. I often completely forget someone’s name and face within minutes of meeting them. I’ve started using Anki to help with it. I make flashcards of all the people I’m supposed to know and run through them every night. It’s a hack that works well enough that (some) people think I’m one of those people that never forgets a face.
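If anyone wants to automate building a deck like that, here’s a rough sketch using the genanki Python library. The people and photo file names are made-up placeholders, and this is just one possible setup, not my exact one:

```python
# Rough sketch: build a "people I should know" Anki deck with genanki.
# The names and image files below are invented placeholders.
import genanki

person_model = genanki.Model(
    1607392319,  # arbitrary but stable model ID
    'Person',
    fields=[{'name': 'Photo'}, {'name': 'Name'}],
    templates=[{
        'name': 'Face -> Name',
        'qfmt': '{{Photo}}',                              # front of the card: the face
        'afmt': '{{FrontSide}}<hr id="answer">{{Name}}',  # back of the card: the name
    }])

deck = genanki.Deck(2059400110, 'People I Should Know')
people = [('alice.jpg', 'Alice (accounting)'), ('bob.jpg', 'Bob (gym)')]

for photo, name in people:
    deck.add_note(genanki.Note(
        model=person_model,
        fields=[f'<img src="{photo}">', name]))

package = genanki.Package(deck)
package.media_files = [photo for photo, _ in people]  # bundle the photos with the deck
package.write_to_file('people.apkg')  # import this file into Anki
```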
Is that intended as a legal or moral position?
As far as I know, the law doesn’t care much whether or not you make money off of IP violations. There are many cases of individuals getting hefty fines for both the personal use and the free distribution of IP. I think if there is commercial use of IP the profits are forfeit to the IP holder. I’m not a lawyer though, so don’t bank on that.
There’s still the initial question too. At present, we let the courts decide if the usage, whether profitable or not, meets the standard of IP violation. Artists routinely take inspiration from one another and sometimes they take it too far. Why should we assume that AI automatically takes it too far and always meets the standard of IP violation?
Yes but there’s a threshold of how much you need to copy before it’s an IP violation.
Copying a single word is usually only enough if it’s a neologism.
Two matching words in a row usually isn’t enough either.
At some point it is enough though and it’s not clear what that point is.
On the other hand it can still be considered an IP violation if there are no exact word matches but it seems sufficiently similar.
Until now we’ve basically asked courts to step in and decide where the line should be on a case by case basis.
We never set the level of allowable copying to 0, we set it to “reasonable”. In theory it’s supposed to be at a level that’s sufficient to, “promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” (US Constitution, Article I, Section 8, Clause 8).
Why is it that with AI we take the extreme position of thinking that an AI that makes use of any information from humans should automatically be considered to be in violation of IP law?
I’m not talking about any particular language.
Modern programming languages are as complex as natural languages. They have sophisticated and flexible grammars. They have huge vocabularies. They’re rich enough that individual projects will have a particular “style”. Programming languages tend to emphasize the imperative and the interrogative over the indicative but they’re all there.
Most programming languages have a few common elements:
Some way to remember things
Some way to repeat sets of instructions
Some way to tell the user what it’s done
Some way to make decisions (ie if X then do Y)
Programmers mix and match those and, depending on the skill of the people involved, end up with Shakespeare, Bulwer-Lytton, or something in between.
The essence of programming is to arrange those elements into a configuration that does something useful for you. It’s going to be hard to know what kinds of useful things you can do if you’re completely fresh to the field.
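To make that concrete, here’s a tiny Python sketch that uses all four of those elements; the task itself is just an invented example:

```python
# A made-up example that exercises the four common elements listed above.
names = ["Ada", "Grace", "Linus"]
long_names = []                   # some way to remember things (variables)

for name in names:                # some way to repeat sets of instructions (a loop)
    if len(name) > 4:             # some way to make decisions (if X then do Y)
        long_names.append(name)

print("Long names:", long_names)  # some way to tell the user what it's done (output)
```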
Python and Javascript are great. The main reason I wouldn’t recommend them for an absolute beginner is that they take some time to set up and, even after that, there’s a bit of a curve before you can do something interesting.
If they go and change configuration settings in an app, they’re learning to manipulate variables.
If they click a “do this N times” option, they’ve learned to create a loop.
etc.
I’d actually start by playing around with the automation and customization functionality you already have. Learn to set email sorting filters, get some cool browser extensions and configure them, maybe even start by customizing your Windows preferences or making some redstone stuff in Minecraft.
Computers are just tools. Programs are just stuff you tell a computer to do over and over again. All the fancy programming languages give you really good control over how you talk to a computer but I’d start with the computer equivalent of “Me Tarzan, you Jane.”
There’s significant investment in green alternatives. Particularly in China, but in many other places as well.
If I’m being honest with myself I do steer towards and away from certain news outlets based on my perception of their overall trustworthiness. In my ideal world I’d judge articles on their individual merits.
For example: when I was a kid, the Wall Street Journal was top tier in reliability. Nothing changed immediately after Rupert Murdoch bought them, but over time I noticed some changes. In particular, I started seeing editorials less clearly marked as such and mixed in with regular articles. That struck me as a shady editorial decision. I’ve read enough shoddy WSJ articles since then that I don’t really trust them anymore. That said, they still put out individual articles that are accurate and well sourced.
For practical administration reasons I suspect you’ll have to take the broad approach of just banning some sources that are egregious repeat offenders. Ideally I’d like to see a set of criteria that define what gets sources on that ban list and what can get them removed. If we can identify reliable fact checking organizations perhaps we could use them as a metric (ie any publication that has more than X fact corrections in an N month period is auto-banned).
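As a very rough sketch of what that auto-ban rule could look like in code (the thresholds and the shape of the correction data are placeholders I made up, not a real policy):

```python
# Rough sketch of the "more than X corrections in N months -> auto-ban" rule.
# Threshold values and the list of correction dates are made-up placeholders.
from datetime import datetime, timedelta

def should_auto_ban(correction_dates, max_corrections=3, window_months=6, now=None):
    """Return True if a source had more than `max_corrections` fact
    corrections within the last `window_months` months."""
    now = now or datetime.now()
    window_start = now - timedelta(days=30 * window_months)  # rough month length
    recent = [d for d in correction_dates if d >= window_start]
    return len(recent) > max_corrections

# Example: a hypothetical source with four corrections in the last two months.
corrections = [datetime.now() - timedelta(days=n) for n in (5, 12, 30, 55)]
print(should_auto_ban(corrections))  # True with the default thresholds
```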
I hate clickbait but I don’t know how to define it. How do we differentiate clickbait from well-written, attention-grabbing headlines?
I’d love to see more attention paid to self policing. For example, Ira Glass did the most epic retraction I’ve ever seen. https://www.thisamericanlife.org/460/retraction When they figured out that their story was wrong they didn’t just say, “Oops, sorry.” They invited the source back on and spent a whole hour analyzing where they went wrong. My respect for NPR shot way up that day. It would be great to see a score of how good media outlets are at admitting their mistakes. That would greatly increase my trust in them.
edit: typo
replacement theory
I had to look that up but it was basically what I expected it to be.
Short answer: no. I have no particular fear of white people (or anyone else for that matter) being replaced.
I’m talking less about any concerns of what the demographics should be and more on identifying what we’re talking about. That’s why I brought up the two contrasting demographics of the US vs the world.
Americans, even those with diverse ancestral backgrounds, tend to view the world through an American lens. Individual subgroups within the US tend to view America through the lens of their own subgroup. I’ve noticed that diversity means different things to different people and I’m wondering what it will mean here.
A comment elsewhere in this thread illustrates the potential conflict. They note that we want to avoid Islamophobia, which I agree with, and that we want to avoid homophobia, which I also agree with. But they make it sound like it will be easy to reconcile the two on a global scale. I suspect that will be much harder to pull off.
No ulterior motive. My post is intended to be interpreted literally. You seemed to be saying that the MBFC rating is good evidence that we should trust MJ. I’m following up and saying that DN meets the same criteria and should be judged the same way.
The first post in this thread questioned if either DN or MJ should be included in the list of reliable sources. You pointed out that while MBFC cites MJ as having a left bias they also cite them as highly accurate.
DN gets basically the same grade from MBFC as MJ.
Even though “high” accuracy is only their second highest rating, “very high” is typically reserved for academic journals, which makes “high” the best rating you can reasonably expect from a non-academic publication.
The page for DN also notes that there have been 0 corrections in the past 5 months.
They consider Democracy Now! to have a bias left of Mother Jones but also highly accurate. https://mediabiasfactcheck.com/democracy-now/
Aside: I just discovered https://mediabiasfactcheck.com/2023/07/17/the-latest-fact-checks-curated-by-media-bias-fact-check-07-17-2023/ I found that when I was looking at what it takes for MediaBiasFactCheck to consider a source to have “very high” reliability rather than simply “high” reliability. Spoiler: you basically need to be an academic journal.
I’m with you on opinion pieces but I wouldn’t lean too hard on the objectivity of “news”.
I’m not sure there actually is such a thing as true objectivity, in practice. There are a ton of ways to inject subjectivity into seemingly objective news. An obvious one is selection bias. Journalists and editors decide what to write about and publish. They decide who gets quoted and which facts get presented. Even if they tell no lies, that leaves a lot of room to present those facts in a variety of different lights.
I think the best we can hope for is independent verifiability. If an article makes a claim, do I just have to believe them or do I have some reasonable way to check that doesn’t involve the author?
I agree. The acceptance threshold for editorials and opinion pieces is just too low. Even in the Gray Lady they sometimes amount to little more than conspiratorial rants with better grammar and a more sophisticated vocabulary.
The standard should ideally be on the articles themselves rather than the publication.
This is a difficult question. I try to focus on the article itself rather than the news site.
The first thing I look for is whether they’re rambling. That’s probably not the best criterion but it’s so obvious. If an article doesn’t get to the point in the first few sentences it probably doesn’t have a point.
The second thing I look for is verification. I already know some things about the world. If I know the article made some mistakes, I’ll assume they’re making other mistakes. If they’re correct about less well-known facts, I mentally bump up their reliability a bit.
If they make a statement about a fact I expect them to source it. If their source is some equivalent of “trust me bro” I’m getting out my salt shovel.
Beyond that I’ll look at the track record of the author and the publication. Do they consistently pass or fall short of the reliable news threshold? If so, I adjust my expectations.
The individual articles or statements come first though. I may have very little confidence in Fox and Friends or in Donald Trump but if they get on TV and make independently verifiable statements that check out then it’s true.
In terms of a simple rule that could be practically implemented, maybe something like: the article must have independently verifiable sources for its claims. One corollary would be: if article A cites article B as a source, don’t post A, just post B directly.
It would be great if there could be some discussion of what exactly “diversity” means. It’s one of those words that people seem to assume is well defined but I can’t get anyone to define it.
I don’t know what the definition should be but I’ve seen variations that assume different definitions and I’m more comfortable with some than with others.
I have a friend who works at a bank. He bragged that his team was 100% diverse. When I asked him what that meant, he said there were no white people on his team. Personally, that seems like a bad definition to me. I have trouble thinking of “diversity” as the removal of some group, even if that group is otherwise over-represented elsewhere. It also ignores any potential diversity along other axes; either traditional factors such as gender or religion, or any diversity of thought (do their analysts include both Frequentists and Bayesians?).
Reddit mostly has users from the US. Should we consider a Lemmy community diverse if it represents the predominant views and voices of the US? If that’s the case the image above needs more white people. https://en.wikipedia.org/wiki/Demographics_of_the_United_States#Race_and_ethnicity
The world is a big place and anyone could join Lemmy. Should we consider a community diverse if it matches the demographics of the planet? In that case the image above has too many white people. https://en.wikipedia.org/wiki/World_population#Global_demographics
The above focuses on race but it’s more widely applicable. This community will most likely consist of mostly US citizens for the near future. If the community is firmly focused on US ideas it will be more about US opinions of the rest of the world than actual diversity. If it actually does include a globally diverse set of ideas it’s likely to get pretty uncomfortable for the majority of the people here.
This has been going on for much longer than Starlink.
There were a number of observatories built in or near cities. They became mostly useless once we figured out electric lights but we still use them for education sometimes.
SpaceX has been working with the NSF so they can continue to dim Starlink https://spacenews.com/nsf-and-spacex-reach-agreement-to-reduce-starlink-effects-on-astronomy/
Now we’re putting more and more observation capabilities deep into space. JWST is already getting images better than anything you could get on Earth, even if you eliminated Starlink and turned off every light on the planet. Ground-based astronomical observation is still relevant but we keep coming up with better alternatives.
“Oh, people can come up with statistics to prove anything, Kent. 14% of people know that.”
-Homer Simpson