

Of children ages 12-17,
- 14.3% have been diagnosed with ADHD.
- Including 17.9% of all boys. (Source)
This data is national, but as you can imagine, there are some schools with fewer diagnoses and some, like the one I mentioned, where it's the norm.


deleted by creator


Yes, what isn’t right is that it’s an upper-class private middle school full of wealthy parents with access to psychiatrists who are financially motivated to provide a fashionable diagnosis.
It’s also a way to get extra time (accommodations) on standardized tests (time and a half or even double time) which further widens the success gap between rich and poor students.


According to the administration at a school I’m familiar with, at least 50% of the 5th grade class has ADHD.
So, not having ADHD is the disorder.


Second, I will reiterate that this is a completely disingenuous comparison using a chart which has no discernibly legitimate or verifiable source.
It’s extremely easy to check GRE performance by major. It’s not a secret.
By the way, did you know the illiteracy of CS majors is such a well-known problem in academia that many universities no longer even require GRE verbal from CS students?


I’d wager I can communicate just fine.
Again, for emphasis: computer scientists are on average the least literate, least well-read college graduates, by quite a margin.
This is an empirical fact.
Interestingly, your first instinct is to deny reality, an attitude that’s totally alien to me, but then again I’m not a conservative.


Do you read what you write?
"cognitive understanding"
As opposed to noncognitive understanding? Wtf are you talking about?


Do I have a source for the fact that computer scientists have no specialized training on topics outside of computer science? (In addition to being apparently illiterate on average, which is what their performance on the GRE implies.)
Fascinating question. Have an upvote.


Most computer scientists have very limited expertise, none of which concerns human society. We might as well survey plumbers for their opinions.
Fun fact: did you know that the average CS major scores worse on reading and writing than a business major?

Imagine that level of cognitive decrepitude.


Bananas are a monocrop. In the second half of the 20th century, a disease (Panama disease) threatened commercial bananas worldwide. A resistant strain, the Cavendish, survived; it is what you now think of as a banana. Banana candy tastes the way bananas (the old Gros Michel variety) used to taste.


deleted by creator


It is not a leading question. The answer just happens to be meaningless.
Asking whether something is good accounts for the vast majority of human concern. Most of our rational activity is fundamentally evaluative.


deleted by creator


"Behaviorally, analog systems are not substrate dependent."
This is partly true, as I already explained at length, since the behavior of any system can be crudely modeled. It’s how LLMs work! But it’s also a non sequitur.
Modeling what a system can do and doing what a system can do are not the same.
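That distinction fits in a few lines of code. Here is a toy bigram predictor, the crude idea behind LLMs at a vastly smaller scale; the training string and function names below are made up purely for illustration:

```python
from collections import Counter, defaultdict

# Learn bigram statistics from an observed sequence, then predict the
# most likely next symbol. The "model" is just a table of counts: it
# mimics the source's behavior without being the source.
def train_bigrams(sequence):
    counts = defaultdict(Counter)
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, symbol):
    # Most frequently observed successor, or None if never seen.
    if symbol not in counts:
        return None
    return counts[symbol].most_common(1)[0][0]

observed = "abababac"
model = train_bigrams(observed)
print(predict_next(model, "a"))  # "b": it followed "a" three times, "c" once
```

The predictor reproduces the sequence’s statistics, but it contains none of whatever process actually generated the sequence.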


I explicitly explained that you can model an analog machine using a digital computer. When you make a topological map of a weather system (or a brain) or take a digital picture of a flower, you are generating a model. This is the subject of the articles you linked me.
No matter how accurate your digital model of a weather system, however, it will never produce rain. The output of a Turing machine (a digital model) is strictly discrete.
You can model digital computers using analog computers. And the reverse is also possible. But digital systems are substrate-independent, whereas analog systems are substrate-dependent. They’re fundamentally inextricable from the stuff of which they’re made.
On the other hand, digital models aren’t made of stuff. They’re abstract. You can certainly instantiate a digital model within a physical substrate (silicon chips), the way you can print a picture of an engine on a piece of paper, but it won’t produce torque like an actual engine let alone rain like an actual weather system.
On a separate note, you really need to acquaint yourself with Complexity Theory if you actually believe our models will ever be anything other than decent estimates.
To learn more, please take a Theoretical Computer Science course.
"Irreducibility isn’t a part of physics."
Correct. It’s theoretical computer science. Again, analog systems are irreducible to digital ones by definition. They can only be modeled (functionally and crudely).
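The "decent estimates" point is easy to demonstrate with a toy chaotic system. This is a sketch, not a real weather model: Euler-integrating the Lorenz equations from two initial conditions that differ by one part in a billion, and measuring how far apart the trajectories drift. The step size and starting points are arbitrary choices for illustration:

```python
# Euler integration of the Lorenz system (classic parameters).
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)  # differs by one part in a billion
max_gap = 0.0
for _ in range(3000):  # 30 simulated time units
    a, b = lorenz_step(a), lorenz_step(b)
    gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_gap = max(max_gap, gap)
print(max_gap)  # many orders of magnitude larger than the initial 1e-9
```

However small the measurement error, the forecast degrades into a statistical estimate; and at no point does any of this arithmetic produce a drop of rain.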



"Biological neurons are actually more digital than artificial neural nets are."
There are three types of computers: digital, analog, and quantum.
Digital means reducible to a Turing machine. Analog, which includes things like flowers and cats, means irreducible by definition. (Otherwise, they would be digital.)
Brains are analog computers (maybe with some quantum components we don’t understand).
Making a mathematical model of an analog computer is like taking a digital picture of a flower. That picture is not the same as the flower. It won’t work the same way. It will not produce nectar, for instance, or perform photosynthesis.
Everything about how a neuron works is completely undigitizable. There’s integration at the axon hillock; there are gooey vesicles full of neurotransmitters whose expression is chemically mediated, dumped into a synaptic cleft of constantly varying width, jostled by Brownian motion, to activate receptors whose binding affinity isn’t even consistent. The best we can do is build mathematical models that sort of predict what happens next on average.
These crude neural maps are not themselves engaged in brain activity — the map is not the territory.
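For concreteness, here is roughly what such a crude model looks like: a leaky integrate-and-fire neuron, the textbook toy. Every constant below is an illustrative placeholder rather than a measured value, and notice how much of the list above (vesicles, diffusion, variable binding affinity) it simply ignores:

```python
# Minimal leaky integrate-and-fire neuron (Euler steps; units are
# nominal ms / mV). All parameters are illustrative, not measured.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r=10.0):
    v = v_rest
    spike_times = []
    for t, i in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates input.
        v += (-(v - v_rest) + r * i) * (dt / tau)
        if v >= v_thresh:
            spike_times.append(t)  # "spike" = threshold crossing
            v = v_reset            # instantaneous reset: nothing like
                                   # real vesicle release or diffusion
    return spike_times

spikes = simulate_lif([2.0] * 100)  # constant drive, arbitrary units
print(len(spikes))
```

It predicts average firing behavior tolerably well, which is exactly the map-versus-territory distinction: a useful summary, not a working neuron.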
Idk where you got the idea that neurons can be digitized, but someone lied to you.


Yes, very.