To add to this: spending some time in custody is inconvenient, but losing your rights by being convicted of something you didn’t even do is far more inconvenient. You think you know what to say, until you say the wrong thing and start digging a hole.
This is good to know, but it adds an extra step compared to simply requiring a passcode on screen lock.
The act of refusing is what makes seizing your phone legal or not. If you willingly hand over your phone, they can use any evidence they find in court. If you refuse to hand it over and they seize and access it anyway, you have a valid path to getting the evidence thrown out as an illegal search and seizure of your property. I’m not a lawyer, but that is the general thought process behind denying them access to your property.
Edit: Just want to say this mostly pertains to United States law and similar legal structures. This advice is not applicable everywhere, and you should research your own country’s rights and legal protections.
I’d personally rather trust that my device can’t be unlocked without my permission than hope I’m able to perform some action to disable it in certain situations. The availability of such features is nice, but I assume I would be incapable of performing them in the moment.
My other thought is: how guilty does one look to a jury of their peers if they immediately attempt to lock their phone in such a manner? I’d rather take the deniability route of “I didn’t want to share my passcode” over “I locked my phone down because the cops were grabbing me.”
To add to this, don’t use biometrics to lock your devices. Cops will “accidentally” use these to unlock devices when they are forcibly seized.
Most oil is not economically salvaged, due to the low cost of extracting it from wells. At best they’ll try to burn it off; at worst they just won’t give a shit.
I think the point would be to make them like cigarette warning labels. At the moment the text can be hidden on a bottle or can in tiny print. It needs to be a big ugly white box with a black border and large text that gets people’s attention.
From my understanding, you are pretty safe as long as you don’t provoke them (walking through the middle of the herd might be considered provoking) or go near their calves. This article from the UK states: “Where recorded, 91% of HSE reported fatalities on the public were caused by cows with calves”. Basically, mothers with a child are going to be very protective.
Cows are domesticated creatures, so they are generally docile, but I would exercise caution because, if need be, they will use their mass and strength against you. I’ve heard stories of farmers running from cows and narrowly escaping under a fence; most of these involved a farmer trying to separate a calf from its mother. I’ve also heard stories of cows jumping fences.
And as far as memes go:
Semi-cold? That’s extra; you’ll be lucky to afford it. The affordable water has been sitting out on the pavement for a few weeks.
Yeah, this is just syntax; every language does it a little differently. Most popular languages seem to derive from C in some capacity. Some differ more than others, and some are unholy conglomerations of unrelated languages that somehow work. Instead of asking why this is different, just ask how it works. That’s made my life a lot simpler.
`var test int` is just `int test` in another language.

`func (u User) hi() { ... }` is just `class User { void hi() { ... } }` in another language (you can guess which language I’m referencing, I bet).

`m := map[string]int{}` is just `Map<String, Integer> m = new HashMap<>()` in another (yes, it’s Java).
Also RTFM, this is all explained, just different!
Edit: I also know this is a very reductive view of things and there are larger differences; I was mostly approaching this from a newer developer’s understanding and just “getting it to work”.
Sadly it wasn’t a bid to open source the AI, but rather a bid for payment.
This would only affect the 12V rail though, no? It’s not like they’re beefing up the 5V rail that supplies your USB ports in excessive amounts. Picking a random pair of PSUs from PCPartPicker, the Corsair RM650e and RM1200e (650 W vs 1200 W) both have a +5V@20A rail. There would be no need for a larger 5V rail to support gaming cards.
Also, correct me if I’m wrong, but most PSUs are at their most efficient at 20–50% utilization, not 100%. I’m basing this off the higher ratings for 80 Plus.
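To put rough numbers on that, here is a minimal sketch using the 80 Plus Gold (115V) efficiency targets as assumed values (87% at 20% load, 90% at 50% load, 87% at 100% load); the 650 W rating is just an example figure:

```python
# Assumed 80 Plus Gold (115V) efficiency targets, keyed by load fraction.
efficiency = {0.20: 0.87, 0.50: 0.90, 1.00: 0.87}

def wall_draw(psu_rating_w: float, load_fraction: float) -> tuple[float, float]:
    """Return (watts drawn from the wall, watts lost as heat)."""
    dc_load = psu_rating_w * load_fraction          # what the components ask for
    ac_draw = dc_load / efficiency[load_fraction]   # what the wall supplies
    return ac_draw, ac_draw - dc_load

# A 650 W unit delivering 325 W (50% load) wastes fewer watts per
# delivered watt than the same unit delivering its full 650 W.
for frac in (0.20, 0.50, 1.00):
    draw, waste = wall_draw(650, frac)
    print(f"{frac:.0%} load: {draw:.0f} W from wall, {waste:.0f} W wasted")
```

The takeaway being that the loss scales with how hard the unit is pushed relative to its efficiency curve, not with the wattage printed on the box.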
Your computer doesn’t “waste” electricity; power usage is on demand. A PSU generally has 3 “rails”: a 12V rail (this powers most of the devices), a 5V rail (for peripherals/USB), and a 3.3V rail (IIRC memory modules use this). Modern PSUs are switched-mode power supplies, using a switching voltage regulator that is more efficient than traditional linear regulators.
The efficiency of the PSU/transformer is what determines whether one or the other is more wasteful. Most PSUs (I would argue any PSU of quality) will have an 80 Plus rating that defines how efficiently they convert power. I’m not familiar enough with modern wall chargers to know what they’re testing at… I could see low-end wall chargers using more wasteful designs, but a high-quality rapid wall charger is probably close to, if not on par with, a PC PSU. Hopefully someone with more knowledge of these can weigh in.
SQL is the industry standard for a reason, it’s well known and it does the job quite well. The important part of any technology is to use it when it’s advantageous, not to use it for everything. SQL works great for looking up relational data, but isn’t a replacement for a filesystem. I’ll try to address each concern separately, and this is only my opinion and not some consensus:
Most programmers aren’t DB experts: most programmers aren’t “experts”, period, so we need to work with this. IT is a wide and varied field that requires a vast depth of knowledge in a specific domain to be an “expert” in just that domain. This is why teams break up responsibilities. The fact that the community came in and fixed the issues doesn’t change the fact that the program worked before. This is all normal in development: you get things working in an acceptable manner, and when the requirements change (in the Lemmy example, scaling requirements), you fix those problems.
Translation step from binary (program): if you are using SQL to store binary data, this might cause performance issues. SQL isn’t an all-in-one data store; it’s a database for running queries against relational data. I would say this is an architecture problem, as there are better methods for storing and distributing binary blobs of data. If you are talking about parsing strings, string parsing is probably one of the least demanding parts of a SQL query. Prepared statements can also be used to separate the query logic from the data and alleviate the SQL injection attack vector.
Yes, there are ORMs: and you’ll see a ton of developers despise ORMs. They are an additional layer of abstraction that can either help or hinder depending on the application. Sure, they make things really easy, but they can also cause many of the problems you mention, like performance bottlenecks. Query builders can also be used to create SQL queries in a manner similar to an ORM, if writing plain string-based queries isn’t ideal.
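As a concrete illustration of the prepared statements mentioned above, here is a minimal sketch using Python’s built-in sqlite3 module; the table and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# The ? placeholder keeps the query logic separate from the data, so the
# input is bound as a value and never parsed as SQL.
malicious = "alice' OR '1'='1"
row = conn.execute("SELECT id FROM users WHERE name = ?", (malicious,)).fetchone()
print(row)  # the whole string is treated as a literal name, so no match
```

The same injection string concatenated directly into the query text would have changed the query’s meaning; bound as a parameter, it’s just data.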
For your own sanity, please use a formatter in your IDE. This will also help when others (and you) read the code, as indentation is a convenience for understanding program flow. From what I see:

- The `enable` and `disable` functions are never called in this portion of code.
- The `enabled` variable never passes scope between the `handleClick` and `animation` methods.
- There is no `await` for `invoke` or `updateCurrentBox`, causing all the code after either to run immediately. As a result, `enabled` is never `false`, since it just instantly flips back to `true`. I’m not sure what library `invoke` is from, but there should be a callback, or the function returns a `Promise` which can be `await`ed.

TL;DR: The bot is configured to condense certain instances and communities. At the moment, only beehaw.org is marked to be condensed.
Quickly looking at the source code, it seems `ReplyToPostsCommand` uses a `SummaryTextWrapper`, which contains an iterable for both `CondensedSummaryTextWrapperProvider` and `DefaultSummaryTextWrapperProvider`. The `DefaultSummaryTextWrapperProvider` has a priority of `-1_000` (so it’s always checked last) and is set to always return `true` from `supports(Community $community): bool`. `CondensedSummaryTextWrapperProvider` references config/services.yaml for its `supports(Community $community): bool` call, which lists 0 condensed communities and 1 condensed instance, being beehaw.org.
Thermometers, like most measurement devices, are always accurate until you get two of them. Each device has a specified tolerance (or should; otherwise it probably has a horrible one), which for a grill thermometer will look like ±5 °C/±10 °F. Additionally, everything used to take a measurement needs to be calibrated regularly to ensure proper function; otherwise readings cannot be trusted. For a thermometer, the easily accessible ways to calibrate are ice water (does it read 0 °C/32 °F?) and boiling water (does it read 100 °C/212 °F?). Using these constants will let you adjust your thermometer and get a (more) accurate reading.
In my humble opinion, we too are simply prediction machines. The main difference is how efficient our brains are at the large number of tasks they’re given, for their size and energy requirements. No matter how complex the network is, it is still a mapped outcome; the number of factors weighed is just extremely large, and that gives a more intelligent response. You can see this with each increment in GPT models: larger and larger parameter sets give more and more intelligent answers. The fact we call these “hallucinations” shows how effective the predictive math is, and mimics humans’ ability to just make things up on the fly when we don’t have a solid knowledge base to back it up.
I do like this quote from the linked paper:
As we will discuss, we find interesting evidence that simple sequence prediction can lead to the formation of a world model.
That is to say, you don’t need complex solutions to map complex problems; you just need to have learned how you got there. It’s never purely random attempts at the problem; it’s always predictive attempts that try to map the expected outcomes, learning by getting it right and wrong.
At this point, it seems fair to conclude the crow is relying on more than surface statistics. It evidently has formed a model of the game it has been hearing about, one that humans can understand and even use to steer the crow’s behavior.
Which is to say that it has a predictive model based on previous games. This does not mean it must rigidly follow previous games, but that by playing many games it can see how each move affects the next. This is a simpler example because most board games are simpler than language, with fewer possible outcomes. This isn’t to say the crow is now a grandmaster at the game, but it has the reasoning to understand possible next moves, knows illegal moves, and knows to take the most advantageous move based on its current model. This is all predictive in nature, with “illegal” moves assigned very low probability because, per the learned behavior, those moves never happen. This also allows possible unknown moves that a different model wouldn’t consider, while overall providing what is statistically the best move based on its model. The crow can thus be placed into unknown situations and give an intelligent response instead of just going “I don’t know this state, I’ll do something random”. The prediction is not always correct, but it will most likely be a valid and, more often than not, statistically sound move.
Overall, we aren’t totally sure what “intelligence” is; we are just organisms that have developed more and more capability to process information out of a need to survive. But getting down to it, we know neurons take inputs and give outputs based on what they perceive as the best response for a given input, and when enough of these are added together we get “intelligence”. In my opinion it’s still all predictive; it’s how the networks are trained and gain meaning from the data that isn’t always obvious. It’s only when you blindly accept any answer as correct that you run into the issues we’ve seen with ChatGPT.
Thank you for sharing the article; it was an interesting read and helped clarify my understanding of the topic.
Disclaimer: I am not an AI researcher and just have an interest in AI. Everything I say is probably gibberish, just my amateur understanding of the AI models used today.
It seems these LLMs use a clever trick of probability to give words meaning via statistics on their usage, so any result is just a statistical chance that those words will work well with each other. The number of indexes used to index “tokens” (in this case, words), along with the number of layers in the model used to correlate usage of those tokens, seems to drastically increase the “intelligence” of the responses. This doesn’t seem able to overcome unknown circumstances, but it does what AI does and relies on probability to answer the question. In those cases, the next-closest thing from the training data is substituted and considered “good enough”. I would think some confidence variable is what is truly needed for current LLMs, as they seem capable of giving meaningful responses but give a “hallucinated” response when not enough data is available to answer the question.
Overall, I would guess this is a limitation in the LLM’s ability to map words to meaning. Imagine reading everything ever written; you’d probably be able to make intelligent responses to most questions. Now imagine you were asked something you had never read about but were expected to answer anyway. This is what I personally feel these “hallucinations” are: the LLM’s best approximations. You can only answer what you know reliably; otherwise you are just guessing.
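The “confidence variable” idea can be sketched as flagging an answer whenever the top next-token probability is low; the candidate words, logits, and 0.5 threshold here are all made up for illustration:

```python
import math

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def answer_with_confidence(candidates, logits, threshold=0.5):
    """Pick the most likely candidate and flag whether it clears the threshold."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return candidates[best], probs[best], probs[best] >= threshold

# A peaked distribution suggests the model "knows"; a flat one suggests a guess.
print(answer_with_confidence(["Paris", "Lyon", "Nice"], [5.0, 1.0, 0.5]))
print(answer_with_confidence(["Paris", "Lyon", "Nice"], [1.1, 1.0, 0.9]))
```

Real systems would need something more nuanced than a single threshold, but it captures the intuition: a flat probability distribution over continuations is exactly the “not enough data, best approximation” case.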
It is M.2, but not the M/B+M key most M.2 SSDs use; rather, it’s an A+E key meant for WiFi/Bluetooth. According to this video, it’s essentially two PCI Express x1 lanes and USB 2.0. The video goes on to explain some possible alternative uses:
So while this slot has its uses, it’s not meant for M.2 drives, but rather WiFi.