• 0 Posts
  • 40 Comments
Joined 4 months ago
Cake day: July 7th, 2024

  • i’d agree that we don’t really understand consciousness. i’d argue it’s more an issue of defining consciousness and what that encompasses than knowing its biological background.

    Personally, no offense, but I think this is a contradiction in terms. If we cannot define “consciousness” then we cannot say we don’t understand it. Don’t understand what? If you have not defined it, then saying we don’t understand it is like saying we don’t understand akokasdo. There is nothing to understand about akokasdo because it doesn’t mean anything.

    In my opinion, “consciousness” is largely a buzzword, so there is just nothing to understand about it. When we actually talk about meaningful things like intelligence, self-awareness, experience, etc, I can at least have an idea of what is being talked about. But when people talk about “consciousness” it just becomes entirely unclear what the conversation is even about, and in none of these cases is it ever an additional substance that needs some sort of special explanation.

    I have never been convinced of panpsychism, IIT, idealism, dualism, or any of these philosophies or models because they seem to be solutions in search of a problem. They have to convince you there really is a problem in the first place, but they only do so by talking about consciousness so vaguely that you can’t pin down what it is, which makes people think we need some sort of special theory of consciousness. Yet if you can’t pin down what consciousness is, then we don’t need a theory of it at all, as there is simply nothing of meaning being discussed.

    They cannot justify themselves in a vacuum. Take IIT for example. In a vacuum, you can say it gives a quantifiable prediction of consciousness, but “consciousness” would just be defined as whatever IIT is quantifying. The issue here is that IIT has not given me a reason why I should care about them quantifying what they are quantifying. There is a reason, of course, but it is implicit. The implicit reason is that what they are quantifying is the same as the “special” consciousness that supposedly needs some sort of “special” explanation (i.e. the “hard problem”), but this implicit reason requires you to not treat IIT in a vacuum.


  • Bruh. We literally don’t even know what consciousness is.

    You are starting from the premise that there is this thing out there called “consciousness” that needs some sort of unique “explanation.” You have to justify that premise. I do agree there is difficulty in figuring out the precise algorithms and physical mechanics that the brain uses to learn so efficiently, but somehow I don’t think this is what you mean by that.

    We don’t know how anesthesia works either, so he looked into that and the best he got was it interrupts a quantom wave collapse in our brains

    There is no such thing as “wave function collapse.” The state vector is just a list of probability amplitudes, and you reduce that list of probability amplitudes to a definite outcome because you observed what the outcome is. If I flip a coin and it has a 50% chance of being heads and a 50% chance of being tails, and it lands on tails, I reduce the probability distribution to 100% probability for tails. There is no “collapse” going on here. Objectifying the state vector is a popular trend when talking about quantum mechanics but has never made any sense at all.
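    To make the coin analogy concrete, here is a toy sketch (my own illustration, nothing more) of a “state vector” as a list of amplitudes and the update after seeing the outcome:

    ```python
    import numpy as np

    state = np.array([1, 1]) / np.sqrt(2)   # amplitudes for [heads, tails]
    probs = np.abs(state) ** 2              # Born rule gives [0.5, 0.5]

    # We observe tails, so we simply condition on that outcome:
    state_after = np.array([0, 1])          # all of the probability now sits on tails
    print(probs, np.abs(state_after) ** 2)  # [0.5 0.5] [0 1]
    ```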

    So maybe Roger Penrose just wasted his retirement on this passion project?

    Depends on whether or not he is enjoying himself. If he’s having fun, then it isn’t a waste.


  • The only observer of the mind would be an outside observer looking at you. You yourself are not an observer of your own mind nor could you ever be. I think it was Feuerbach who originally made the analogy that if your eyeballs evolved to look inwardly at themselves, then they could not look outwardly at the outside world. We cannot observe our own brains as they only exist to build models of reality; if the brain had a model of itself, it would have no room left over to model the outside world.

    We can only assign an object to be what is “sensing” our thoughts through reflection. Reflection is ultimately still building models of the outside world but the outside world contains a piece of ourselves in a reflection, and this allows us to have some limited sense of what we are. If we lived in a universe where we somehow could never leave an impression upon the world, if we could not see our own hands or see our own faces in the reflection upon a still lake, we would never assign an entity to ourselves at all.

    We assign an entity onto ourselves for the specific purpose of distinguishing ourselves as an object from other objects, but this is not an a priori notion (“I think therefore I am” is lazy sophistry). It is an a posteriori notion derived through reflection upon what we observe. We never actually observe ourselves as such a thing is impossible. At best we can observe reflections of ourselves and derive some limited model of what “we” are, but there will always be a gap between what we really are and the reflection of what we are.

    Precisely what is “sensing your thoughts” is yourself derived through reflection which inherently derives from observation of the natural world. Without reflection, it is meaningless to even ask the question as to what is “behind” it. If we could not reflect, we would have no reason to assign anything there at all. If we do include reflection, then the answer to what is there is trivially obvious: what you see in a mirror.




  • Why are you isolating a single algorithm? There are tons of them that speed up various aspects of linear algebra, not just that single one, and there have been many improvements to these algorithms since they were first introduced; there are a lot more in the literature than just in the popular consciousness.

    The point is not that it will speed up every major calculation, but these are calculations that could be made use of, and there will likely even be more similar algorithms discovered if quantum computers are more commonplace. There is a whole branch of research called quantum machine learning that is centered solely around figuring out how to make use of these algorithms to provide performance benefits for machine learning algorithms.

    If they would offer speed benefits, then why wouldn’t you want to have the chip that offers the speed benefits in your phone? Of course, in practical terms, we likely will not have this due to the difficulty and expense of quantum chips, and the fact that they currently have to be cooled to near absolute zero. But your argument suggests that if somehow consumers could have access to technology in their phone that would offer performance benefits to their software, they wouldn’t want it.

    That just makes no sense to me. The issue is not that quantum computers could not offer performance benefits in theory. The issue is more about whether or not the theory can be implemented in practical engineering terms, as well as a cost-to-performance ratio. The engineering would have to be good enough to both bring the price down and make the performance benefits high enough to make it worth it.

    It is the same with GPUs. A GPU can only speed up certain problems, and it would thus be even more inefficient to try and force every calculation through the GPU. You have libraries that only call the GPU when it is needed for certain calculations. This ends up offering major performance benefits and if the price of the GPU is low enough and the performance benefits high enough to match what the consumers want, they will buy it. We also have separate AI chips now as well which are making their way into some phones. While there’s no reason at the current moment to believe we will see quantum technology shrunk small and cheap enough to show up in consumer phones, if hypothetically that was the case, I don’t see why consumers wouldn’t want it.
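    The “only call the GPU when it actually helps” pattern is roughly this (a minimal sketch; I’m assuming CuPy as the GPU library, and the 512 threshold is an arbitrary illustrative number):

    ```python
    import numpy as np

    try:
        import cupy as cp          # GPU array library; an assumption for this sketch
        HAVE_GPU = True
    except ImportError:
        HAVE_GPU = False

    def matmul(a, b, gpu_threshold=512):
        """Multiply two matrices, offloading to the GPU only when they are large
        enough that the transfer overhead is likely to pay off."""
        if HAVE_GPU and min(a.shape[0], b.shape[1]) >= gpu_threshold:
            return cp.asnumpy(cp.asarray(a) @ cp.asarray(b))  # copy over, multiply, copy back
        return a @ b                                          # small case: plain CPU NumPy
    ```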

    I am sure clever software developers would figure out how to make use of them if they were available like that. They likely will not be available like that any time in the near future, if ever, but assuming they are, there would probably be a lot of interesting use cases for them that have not even been thought of yet. They will likely remain something largely used by businesses but in my view it will be mostly because of practical concerns. The benefits of them won’t outweigh the cost anytime soon.


  • Uh… one of those algorithms in your list is literally for speeding up linear algebra. Do you think just because it sounds technical it’s “businessy”? All modern technology is technical, that’s what technology is. It would be like someone saying, “GPUs would be useless to regular people because all they mainly do is speed up matrix multiplication. Who cares about that except for businesses?” Many of the algorithms there offer potential speedups for linear algebra operations, which is the basis of both graphics and AI. One of the algorithms in that list is even specifically for machine learning. It’s huge for regular consumers… assuming the technology could ever progress to come to regular consumers.


  • bunchberry@lemmy.world to Science Memes@mander.xyz · Crystals · 3 months ago

    OrchOR makes way too many wild claims for there to easily be any evidence for it. Even if we discover quantum effects (in the sense of scalable interference effects, which have absolutely not been demonstrated) in the brain, that would just demonstrate there are quantum effects in the brain; OrchOR is filled with a lot of assumptions which go far beyond this and would not be anywhere near justified. One of them is its reliance on gravity-induced collapse, which is nonrelativistic, meaning it cannot reproduce the predictions of quantum field theory, our best theory of the natural world.

    A theory is ultimately not just a list of facts but a collection of facts under a single philosophical interpretation of how they relate to one another. This is more of a philosophical issue, but even if OrchOR proves there is gravitationally induced collapse and that there are quantum effects in the brain, we would still just take these two facts separately. OrchOR tries to unify them under a bizarre philosophical interpretation called the Penrose–Lucas argument, which says that because humans can believe things that are not proven, human consciousness must be noncomputable, and because human consciousness is not computable, it must be reducible to something whose outcome you cannot algorithmically predict, which would be true of an objective collapse model. Ergo, wave function collapse causes consciousness.

    Again, even if they proved that there are scalable quantum interference effects in the brain, and even if they proved that there is gravitationally induced collapse, that alone does not demonstrate OrchOR unless you actually think the Penrose–Lucas argument makes sense. They would just be two facts which we would take separately. It would just be a fact that there is gravitationally induced collapse and a fact that there are scalable quantum interference effects in the brain, but there would be no reason to adopt any of their claims about “consciousness.”

    But even then, there is still no strong evidence that the brain in any way makes use of quantum interference effects, only loose hints that it may or may not be possible with microtubules, and there is definitely no evidence of gravitationally induced collapse.


  • A person who would state they fully understand quantum mechanics is the last person i would trust to have any understanding of it.

    I find this sentiment can lead to devolving into quantum woo and mysticism. If you think anyone trying to tell you quantum mechanics can be made sense of rationally must be wrong, then you are implicitly suggesting that quantum mechanics is something that cannot be made sense of, and it then logically follows that people who speak in a way that does not make sense, and who have no expertise in the subject and so do not even claim to make sense, are the more reliable sources.

    It’s really a sentiment I am not a fan of. When we encounter difficult problems that seem mysterious to us, we should treat the mystery as an opportunity to learn. It is very enjoyable, in my view, to read all the different views people put forward to try and make sense of quantum mechanics, to understand it, and then to contemplate what they have to offer. To me, the joy of a mystery is not to revel in the mystery, but to search for solutions to it, and I will say the academic literature is filled with pretty good accounts of QM these days. It’s been around for a century; a lot of the ideas are very developed.

    I also would not take the game Outer Wilds that seriously. It plays into the myth that quantum effects depend upon whether or not you are “looking,” which is simply not the case. You end up with very bizarre and misleading results from this, for example, in the part where you land on the quantum moon and have to look at the picture of it for it to not disappear because your vision is obscured by fog. This makes no sense in light of real physics, because the fog is still part of the moon and your ship is still interacting with the fog, so there is no reason it should hop to somewhere else.

    Now quantum science isn’t exactly philosophy, ive always been interested in philosophy but its by studying quantum mechanics, inspired by that game that i learned about the mechanic of emerging properties. I think on a video about the dual slit experiment.

    The double-slit experiment is a great example of something often misunderstood as somehow being evidence that observation plays some fundamental role in quantum mechanics. Yes, if you observe which slit the particle passes through, the interference pattern disappears. Yet you can also trivially prove in a few lines of calculation that if the particle interacts with a single other particle as it passes through the two slits, that would also destroy the interference effects.

    You model this by computing what is called a density matrix for the joint system of the particle going through the two slits and the particle it interacts with, and then you do what is called a partial trace whereby you “trace out” the particle it interacts with, giving you a reduced density matrix of only the particle that passes through the two slits. You find that, as a result of interacting with another particle, its coherence terms reduce to zero, i.e. it decoheres and thus loses the ability to interfere with itself.

    If a single particle interaction can do this, then it is not surprising it interacting with a whole measuring device can do this. It has nothing to do with humans looking at it.
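    If you want to see that partial-trace calculation concretely, here is a minimal sketch (my own toy example, with one “which-slit” qubit and one extra particle it either does or does not interact with):

    ```python
    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)
    plus = (ket0 + ket1) / np.sqrt(2)        # particle in superposition of both slits

    def reduced_density_matrix(joint_state):
        """Density matrix of the slit particle after tracing out the other particle."""
        psi = joint_state.reshape(2, 2)      # first index: slit particle, second: the other particle
        return psi @ psi.conj().T            # partial trace over the second index

    # Case 1: no interaction, the other particle stays in |0>
    print(reduced_density_matrix(np.kron(plus, ket0)).round(2))
    # [[0.5 0.5]
    #  [0.5 0.5]]   <- off-diagonal coherence terms present: interference possible

    # Case 2: a single interaction records which slit the particle took
    entangled = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
    print(reduced_density_matrix(entangled).round(2))
    # [[0.5 0. ]
    #  [0.  0.5]]   <- coherences are zero: decohered, no interference
    ```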

    At that point i did not yet know that emergence was already a known topic in philosophy just quantum science, because i still tried to avoid external influences but it really was the breakthrough I needed and i have gained many new insights from this knowledge since.

    Eh, you should be reading books and papers in the literature if you are serious about this topic. I agree that a lot of philosophy out there is bad so sometimes external influences can be negative, but the solution to that shouldn’t be to entirely avoid reading anything at all, but to dig through the trash to find the hidden gems.

    My views when it comes to philosophy are pretty fringe as most academics believe the human brain can transcend reality and I reject this notion, and I find most philosophy falls right into place if you reject this notion. However, because my views are a bit fringe, I do find most philosophical literature out there unhelpful, but I don’t entirely not engage with it. I have found plenty of philosophers and physicists who have significantly helped develop my views, such as Jocelyn Benoist, Carlo Rovelli, Francois-Igor Pris, and Alexander Bogdanov.


  • This is why many philosophers came to criticize metaphysical logic in the 1800s, viewing it as dealing with absolutes when reality does not actually exist in absolutes, stating that we need some other logical system which could deal with the “fuzziness” of reality more accurately. That was the origin of the notion of dialectical logic from philosophers like Hegel and Engels, which caught on with some popularity in the east but then was mostly forgotten in the west outside of some fringe sections of academia. Even long prior to Bell’s theorem, the physicist Dmitry Blokhintsev, who adhered to this dialectical materialist mode of thought, wrote a whole book on quantum mechanics where, in the first part, he discusses the need to abandon the false illusion of the rigidity and concreteness of reality, and shows how this is an illusion even in the classical sciences, where everything has uncertainty, all predictions eventually break down, and it is never actually possible to fully separate something from its environment. These kinds of views heavily influenced the contemporary physicist Carlo Rovelli as well.


  • bunchberry@lemmy.world to Science Memes@mander.xyz · double slit · 4 months ago

    Both these figures are embarrassingly bad.

    Hoffman confuses function for perception and constantly uses arguments demonstrating that things can interpret reality incorrectly (which is purely a question of function) in order to argue they cannot perceive reality “as it is,” which is a huge non sequitur. He keeps going around promoting his “theorem” which supposedly “proves” this, yet if you read the book where he explains it, the theorem is again clearly about function: it only shows that limitations in cognitive and sensory capabilities can lead something to interpret reality incorrectly, yet he draws a wild conclusion, which he never justifies, that this means they do not perceive reality “as it is” at all.

    Kastrup is also just incredibly boring because he never reads books, so he is convinced the only two philosophical schools in the universe are his personal idealism and metaphysical realism, the latter of which he constantly and incorrectly calls “materialism,” when not all materialist schools of thought are even metaphysically realist. Unless you are yourself a metaphysical realist, nothing Kastrup has ever written is interesting at all, because he just pretends you don’t exist.

    Metaphysical realism is just a popular worldview in the west that most laymen tend to naturally take on unwittingly. If you’re a person who has ever read books in your life, then you’d quickly notice that attacking metaphysical realism doesn’t get you to idealism; at best it gets you to metaphysical realism not being a coherent worldview… which is the only thing I agree with Kastrup on.


  • Classical computers compute using 0s and 1s which refer to something physical, like voltage levels of 0v or 3.3v respectively. Quantum computers also compute using 0s and 1s that refer to something physical, like the spin of an electron, which can only be up or down. These qubits differ, though, because with a classical bit there is just one thing to “look at” (called an “observable”) if you want to know its value. If I want to know whether the voltage level is 0 or 1, I can just take out my multimeter and check. There is just one single observable.

    With a qubit, there are actually three observables: σx, σy, and σz. You can think of a qubit like a sphere where you can measure it along its x, y, or z axis. These often correspond to real rotations in real life; for example, you can measure electron spin using something called a Stern–Gerlach apparatus, and you can measure a different axis by physically rotating the whole apparatus.

    How can a single 0 or 1 be associated with three different observables? Well, the qubit can only have a single 0 or 1 at a time. Let’s say you measure its value on the z-axis, i.e. you measure σz, and you get 0 or 1; the qubit then ceases to have values for σx or σy. They just don’t exist anymore. If you then go measure, let’s say, σx, then you will get something entirely random, and then the value for σz will cease to exist. So it can only hold one bit of information at a time, but measuring it on a different axis will “interfere” with that information.
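    A tiny sketch of that (my own illustration, nothing official): prepare a qubit with a definite σz value, and a σx measurement comes out completely random:

    ```python
    import numpy as np

    rng = np.random.default_rng()

    def measure(state, observable):
        """Projective measurement: returns the outcome and the post-measurement state."""
        eigvals, eigvecs = np.linalg.eigh(observable)
        probs = np.abs(eigvecs.conj().T @ state) ** 2
        k = rng.choice(len(eigvals), p=probs)
        return eigvals[k], eigvecs[:, k]

    sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
    sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

    state = np.array([1, 0], dtype=complex)                 # definite σz value (+1)
    print(measure(state, sigma_z)[0])                       # always +1
    print([measure(state, sigma_x)[0] for _ in range(10)])  # +1/-1 at random, 50/50
    ```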

    It’s thus not possible to actually know the values for all the different observables because only one exists at a time, but you can also use them in logic gates where one depends on an axis with no value. For example, if you measure a qubit on the σz axis, you can then pass it through a logic gate where it will flip a second qubit or not depending on whether σx is 0 or 1. Of course, if you measured σz, then σx has no value, so you can’t say whether or not it will flip the other qubit, but you can say that the two would be correlated with one another (if σx is 0 then it will not flip it, if it is 1 then it will, and thus they are related to one another). This is basically what entanglement is.

    Because you cannot know the outcome when you have certain interactions like this, you can only model the system probabilistically based on the information you do know, and because measuring a qubit on one axis erases its value on all others, some information you know about the system can interfere with (cancel out) other information you know about it. Waves can also interfere with each other, and so, oddly enough, it turns out you can model how your predictions of the system evolve over the computation using a wave function, which can then be used to derive a probability distribution of the results.

    What is even more interesting is that if you have a system like this, where you have to model it using a wave function, it turns out it can in principle execute certain algorithms exponentially faster than classical computers. So they are definitely nowhere near the same as classical computers. Their complexity scales up exponentially when trying to simulate quantum computers on a classical computer. Every additional qubit doubles the complexity, and thus it becomes really difficult to even simulate small numbers of qubits. I built my own simulator in C and it uses 45 gigabytes of RAM to simulate just 16 qubits. I think the world record is literally only like 56.
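    For a rough sense of why it blows up (my own back-of-envelope numbers; where a given simulator lands depends on whether it stores just the state vector or full 2^n × 2^n matrices for gates or density matrices):

    ```python
    for n in (16, 30, 45, 56):
        state_vec = (2 ** n) * 16            # bytes: one complex double per amplitude
        full_matrix = (2 ** n) ** 2 * 16     # bytes: a dense 2^n x 2^n complex matrix
        print(f"{n:2d} qubits: state vector {state_vec / 2**30:14.3f} GiB, "
              f"full matrix {full_matrix / 2**30:.3e} GiB")
    ```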



  • Even if you believe there really exists a “hard problem of consciousness,” even Chalmers admits such a thing would have to be fundamentally unobservable and indistinguishable from something that does not have it (see his p-zombie argument), so it could never be something discovered by the sciences, or something discovered at all. Believing there is something immaterial about consciousness inherently requires an a priori assumption and cannot be something derived from a posteriori observational evidence.


  • Reading books on natural philosophy. By that I mean not the mathematics of the physics itself, but what the mathematics actually tells us about the natural world, how to interpret it and think about it on a more philosophical level. Not a topic I really talk to many people irl about, because most people don’t even know what the philosophical problems around this topic are. I mean, I’d need a whole whiteboard just to walk someone through Bell’s theorem to even give them an explanation of why it is interesting in the first place. There is too much of a barrier of entry for casual conversation.

    You would think since natural philosophy involves physics that it would not be niche, because there are a lot of physicists, but most don’t care about the topic either. If you can plug in the numbers and get the right predictions, then surely that’s sufficient, right? Who cares about what the mathematics actually means? It’s a fair mindset to have, perfectly understandable and valid, but not part of my niche interests, so I just read tons and tons and tons of books and papers regarding a topic which hardly anyone cares about. It is very interesting to read, say, the Einstein–Bohr debates, or Schrodinger trying to salvage continuity by viewing a loss of continuity as a breakdown in the classical notion of causality, or some of the contemporary discussions on the subject such as Carlo Rovelli’s relational quantum mechanics or Francois-Igor Pris’ contextual realist interpretation. Things like that.

    It doesn’t even seem to be that popular of a topic among philosophers, because most don’t want to take the time to learn the math behind something like Bell’s theorem (it’s honestly not that hard, just a bit of linear algebra). So as a topic it’s pretty niche but I have a weird autistic obsession over it for some reason. Reading books and papers on these debates contributes nothing at all practically beneficial to my life and there isn’t a single person I know outside of online contacts who even knows wtf I’m talking about but I still find it fascinating for some reason.


  • bunchberry@lemmy.world to 196@lemmy.blahaj.zone · Rule elitism · 4 months ago

    We feel conscious and have an internal experience

    It does not make sense to add the qualifier “internal” unless it is being contrasted with “external.” It makes no sense to say “I’m inside this house” unless you’re contrasting it with “as opposed to outside the house.” Speaking of “internal experience” is a bit odd in my view because it implies there is such a thing as an “external experience”. What would that even be?

    What about the p-zombie, the human person who just doesn’t have an internal experience and just had a set of rules, but acts like every other human?

    The p-zombie argument doesn’t make sense as you can only conceive of things that are remixes of what you’ve seen before. I have never seen a pink elephant, but I’ve seen pink things and I’ve seen elephants, so I can remix them in my mind and imagine it. But if you ask me to imagine an elephant in a color I’ve never seen before? I just can’t do it, I wouldn’t even know what that means. Indeed, a person blind since birth cannot “see” at all, not in their imagination, not even in their dreams.

    The p-zombie argument asks us to conceive of two people that are not observably different in any way yet still different because one is lacking some property that the other has. But if you’re claiming you can conceive of this, I just don’t believe you. You’re probably playing some mental tricks on yourself to make you think you can conceive of it, but you cannot. If there is nothing observably different about them then there is nothing conceivably different about them either.

    What about a cat, who apparently has a less complex internal experience, but seems to act like we’d expect if it has something like that? What about a tick, or a louse? What about a water bear? A tree? A paramecium? A bacteria? A computer program?

    This is what Thomas Nagel and David Chalmers ask, and they then settle on “mammals only” because they have an unjustified mammalian bias. Like I said, there is no “internal” experience, there is just experience. Nagel and Chalmers both rely on an unjustified premise that “point-of-view” is unique to mammalian brains: supposedly objective reality is point-of-view independent, and since experience clearly has an aspect of point-of-view, that means experience must be a product purely of mammalian brains, and then they demand the “physicalists” prove how non-experiential reality gives rise to the experiential realm.

    But the entire premise is arbitrary and wrong. Objective reality is not point-of-view independent. In general relativity, reality literally changes depending on your point-of-view. Time passes a bit faster for people standing up than for people sitting down, lengths of rulers can change between observers, and velocities of objects can change as well. Relational quantum mechanics goes even further and shows that all variable properties of particles depend upon point-of-view.
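    To put a rough number on the standing-versus-sitting point (my own back-of-envelope, using the standard weak-field approximation gΔh/c², with a half-metre height difference as an arbitrary assumption):

    ```python
    g = 9.81            # m/s^2
    c = 299_792_458     # m/s
    h = 0.5             # metres of height difference (assumed for illustration)

    fractional_shift = g * h / c**2                       # ~5e-17 faster per second
    nanoseconds_per_year = fractional_shift * 365.25 * 24 * 3600 * 1e9
    print(f"{fractional_shift:.2e} fractional rate difference, "
          f"~{nanoseconds_per_year:.2f} ns gained per year")
    ```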

    The idea that objective reality is point-of-view independent is just entirely false. It is point-of-view dependent all the way down. Experience is just objective reality as it actually exists, independent of the observer but dependent upon the point-of-view they occupy. It has nothing to do with mammalian brains, “consciousness,” or subjectivity. If reality is point-of-view dependent all the way down, then it is not even possible to conceive of an intelligent being that would occupy a unique point-of-view, because everything occupies its own unique point-of-view, even a rock. It’s not a byproduct of the “conscious mind” but just a property of objective reality: experience is objective reality independent of the observer, but dependent upon the context of that experience.

    There’s a continuum one could construct that includes all those things and ranks them by how similar their behaviors are to ours, and calls the things close to us conscious and the things farther away not, but the line is ever going to be fuzzy. There’s no categorical difference that separates one end of the spectrum from the other, it’s just about picking where to put the line.

    When you go down this continuum what gradually disappears is cognition, that is to say, the ability to think about, reflect upon, be self-aware of, one’s point-of-view. The point-of-viewness of reality, or more simply the contextual nature of reality, does not disappear at any point. Only the ability to talk about it disappears. A rock cannot tell you anything about what it’s like to be a rock from its context, it has no ability to reflect upon the point-of-view it occupies.

    You’re right there is no hard-and-fast line for cognition, but that’s true of anything in nature. There’s no hard-and-fast line for anything. Take a cat for example: where does the cat begin and end, both in space and time? Create a rigorous definition of its borders. You won’t be able to do it. All our conceptions are human creations and therefore a bit fuzzy. Reality is infinitely complex and we cannot deal with the infinite complexity all at once, so we break it up into chunks that are easier to work with: cats, dogs, trees, red, blue, hydrogen, helium, etc. But you always find when you look at these things a little more closely that their nature as discrete “things” becomes rather fuzzy and disappears.


  • There shouldn’t be a distinction between quantum and non-quantum objects. That’s the mystery. Why can’t large objects exhibit quantum properties?

    What makes quantum mechanics distinct from classical mechanics is the fact that not only are there interference effects, but statistically correlated systems (i.e. “entangled”) can seem to interfere with one another in a way that cannot be explained classically, at least not without superluminal communication, or introducing something else strange like the existence of negative probabilities.

    If it wasn’t for these kinds of interference effects, then we could just chalk up quantum randomness to classical randomness, i.e. it would just be the same as any old form of statistical mechanics. The randomness itself isn’t really that much of a defining feature of quantum mechanics.

    The reason I say all this is because we actually do know why there is a distinction between quantum and non-quantum objects and why large objects do not exhibit quantum properties. It is a mixture of two factors. First, larger systems like big molecules have smaller wavelengths, so interference with other molecules becomes harder and harder to detect. Second, there is decoherence. Even for small particles, if they interact with a ton of other particles and you average over these interactions, you will find that the interference terms (the “coherences” in the density matrix) converge to zero, i.e. when you inject noise into a system its average behavior converges to a classical probability distribution.
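    For the wavelength point, a quick illustration (my own rough numbers for the masses and speeds, just to show the scale) using the de Broglie relation λ = h / (m·v):

    ```python
    h = 6.626e-34    # Planck constant, J*s

    for name, mass_kg, speed_m_s in [
        ("electron",           9.11e-31, 1e6),
        ("C60 fullerene",      1.20e-24, 200.0),   # ~720 u, typical beam speed in interference experiments
        ("dust grain (~1 ug)", 1e-9,     0.01),
    ]:
        wavelength = h / (mass_kg * speed_m_s)
        print(f"{name:20s} lambda ~ {wavelength:.1e} m")
    ```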

    Hence, we already know why there is a seeming “transition” from quantum to classical. This doesn’t get rid of the fact that it is still statistical in nature; it doesn’t give you a reason why, when a particle has a 50% chance of being over there and a 50% chance of being over here and you measure it and find it over here, it wasn’t over there. Decoherence doesn’t tell you why you actually get the results you do from a measurement, it’s still fundamentally random (which bothers people for some reason?).

    But it is well-understood how quantum probabilities converge to classical probabilities. There have even been studies that have reversed the process of decoherence.


  • For the first question, I would recommend reading the philosopher and physicist Francois-Igor Pris who not only seems to understand the deep philosophical origins of the problem, but also provides probably the simplest solution to it. Pris points out that we cannot treat the philosophical ramification in isolation, as if the difficulty in understanding quantum physics originates from quantum physics itself. It must originate from a framework in which we are trying to apply to quantum physics that just breaks down, and therefore it must originate from preconceived philosophical notions people have before even learning of quantum physics.

    In other words, you have to go back to the drawing board, question very foundational philosophical notions. He believes that it originates from the belief in metaphysical realism in the traditional sense, which is the idea that there is an objective reality but it is purely metaphysical, i.e. entirely invisible because what we perceive is merely an illusion created by the conscious mind, but somehow it is given rise to by equivalent objects that are impossible to see. For example, if you have a concept of a rock in your mind, that concept “reflects” a rock that is impossible to see, what Kant had called the thing-in-itself. How can a reality that is impossible to observe ever “give rise to” what we observe? This is basically the mind-body problem.

    Most academics refuse to put forward a coherent answer to this, and in a Newtonian framework it can be ignored. This problem resurfaces in quantum physics, because you have the same kind of problem yet again. What is a measurement if not an observation, and what is an observation if not an experience? You have a whole world of invisible waves floating around in Hilbert space that suddenly transform themselves into something we can observe (i.e. experience), into observable particles in spacetime, the moment we attempt to look at them.

    His point is ultimately that, because people push off coming up with a philosophical solution to the mind-body problem, when it resurfaces as the measurement problem, people have no idea how to even approach it. However, he also points out that any approach you do take ultimately parallels whatever solution you would take to the mind-body problem.

    For example, eliminative materialists say the visible world does not actually exist but only the nonvisible world, and that our belief that we can experience things is an illusion. This parallels the Many Worlds Interpretation, which gets rid of physical particles, and thus all observables, and only has waves evolving in Hilbert space. Idealists argue in favor of getting rid of invisible reality and just speaking of the mind, and if you read the philosophical literature you will indeed find a lot of academics who are idealists who try to justify it with quantum mechanics.

    Both of these positions are, in my view, problematic, and I like Pris’ own solution based on Jocelyn Benoist’s philosophy of contextual realism, which is in turn based on Ludwig Wittgenstein’s writings. Benoist has written extensively against all the arguments claiming that reality is invisible and has instead argued that what we experience is objective reality as it exists, independent of the observer but dependent upon the context of the observation. Thus he is critical of pretty much all modern philosophers, who overwhelmingly adhere either to metaphysical realism or to idealism. There is no mind-body problem under this framework because reality was never invisible to begin with, so there is no “explanatory gap.”

    Apply this thinking to quantum mechanics and it also provides a solution to the measurement problem, one that is probably the simplest and most intuitive and is very similar to Carlo Rovelli’s interpretation. Reality depends upon context all the way down, meaning that the properties of systems must be context variant. And that’s really the end of the story: no spooky action at a distance, no multiverse, no particles in two places at once, no language of observer-dependence, etc.

    Whenever you describe physical reality, you have to pick a coordinate system as reality depends upon context and is not “absolute,” or as Rovelli would say, reality depends upon the relations of a system to every other system. Hence, if you want to describe a system, you have to pick a coordinate system under which it will be “observed,” kind of like a reference frame, but the object you choose as the basis of the coordinate system has to actually interact with the other object. The wave function then is just a way for accounting for the system’s context as it incorporates the relations between the system being used as the basis of the reference frame and the object that it will interact with.

    Basically, it is not much different from Copenhagen, except “observer-dependence” is replaced by “context-dependence” as the properties of systems are context variant and any physical system, even a rock, can be used as the basis of the coordinate system. But, of course, if you want to predict what you will observe, then you always implicitly use your own context as the basis of the coordinate system. This is a realist stance, but not a metaphysical realist stance, because the states of particles are not absolute, there is no thing-in-itself, and the reality is precisely what you perceive and not some waves in Hilbert space beyond it (these are instead treated as tools for predicting what the value will be when you measure it, and not itself an entity). Although, it is only whether or not they have a property at all that is context variant.

    If two observers have interacted with the same particle, they will agree as to its state, as you do not get disagreements over the actual values of those particles, only over whether or not they have a state at all. They would not be verbal disagreements either, because if an observer measures the state of a particle and then goes and tells it to someone else, it also indirectly enters that person’s context, as they become correlated with that particle through their friend. You only get disagreements if there is no contact. For example, in the Wigner’s friend paradox, where the friend has measured the particle but has not told Wigner the result, nor has Wigner measured it himself, from Wigner’s context it would indeed have no state.

    The “collapse” would then not be a collapse of a physical “wave” but, again, reality is context variant, and so if you interact with a system, then it changes your relation to it, so you have to update the wave function to account for a change in context, kind of like if you change your reference frame in Galilean relativity. Everything is interpreted through this lens whereby nature is treated as context variant in this way, and it resolves all the paradoxes without introducing anything else. So if you can accept that one premise then everything else is explained. By abandoning metaphysical realism, it also simultaneously solves the other philosophical problems that originate from that point of view, i.e. the “hard problem” does not even make sense in a contextual realist framework and is not applicable.


  • Yes, there are a lot of intuitive understandings in the literature if you’re willing to look for it. The problem is that most people believe in a Newtonian view of the world which just is not compatible with quantum physics, so it requires you to alter some philosophical beliefs, and physics professors don’t really want to get into philosophical arguments, so it’s not really possible to reach a consensus on the question in physics departments. Even worse, there’s rarely a consensus on anything if you go to the philosophy department. So it’s not really that there are not very simple and intuitive ways to understand quantum mechanics, it’s that it’s not possible to get people to agree upon a way to interpret it, so there is a mentality to just avoid interpretation at all so that students don’t get distracted from actually understanding the math.


  • That’s actually not quite accurate, although that is how it is commonly interpreted. The reason it is not accurate is because Bell’s theorem simply doesn’t show there are no hidden variables, and indeed Bell himself states very clearly what the theorem proves in the conclusion of his paper.

    In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, so that such a theory could not be Lorentz invariant.[1]

    In other words, you can have hidden variables, but those hidden variables would not be Lorentz invariant. What is Lorentz invariance? Well, to be “invariant” basically means to be absolute, that is to say, unchanging based on reference frame. The term Lorentz here refers to Lorentz transformations under Minkowski space, i.e. the four-dimensional spacetime described by special relativity.
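    As an aside, if you want a concrete handle on the correlations the theorem is actually about, here is a quick sketch of the standard CHSH version (my own calculation, not from Bell’s paper): quantum mechanics predicts a CHSH value of 2√2 ≈ 2.83, while any local hidden variable model is bounded by 2.

    ```python
    import numpy as np

    # Singlet-state correlation E(a, b) = -cos(a - b) for measurement angles a and b
    def E(a, b):
        return -np.cos(a - b)

    a1, a2 = 0.0, np.pi / 2            # Alice's two settings
    b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings

    chsh = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
    print(chsh)   # 2.828... = 2*sqrt(2), above the local bound of 2
    ```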

    This implies you can actually have hidden variables under one of two conditions:

    1. Those hidden variables are invariant under some other framework that is not special relativity, basically meaning the signals would have to travel faster than light and thus would contradict special relativity and you would need to replace it with some other framework.
    2. Those hidden variables are variant. That would mean they do indeed change based on reference frame. This would allow local hidden variable theories and thus even allow for current quantum mechanics to be interpreted as a statistical theory in a more classical sense as it even evades the PBR theorem.[2]

    The first view is unpopular because special relativity is the basis of quantum field theory, and thus contradicting it would contradict with one of our best theories of nature. There has been some fringe research into figuring out ways to reformulate special relativity to make it compatible with invariant hidden variables,[3] but given quantum mechanics has been around for over a century and nobody has figured this out, I wouldn’t get your hopes up.

    The second view is unpopular because it can be shown to violate a more subtle intuition we all tend to have, one taken for granted so much that I’m not sure there’s even a name for it. The intuition is that not only should there be no mathematical contradictions within a single given reference frame, so that an observer will never see the laws of physics break down, but that there should additionally be no contradictions when all possible reference frames are considered simultaneously.

    It is not physically possible to observe all reference frames simultaneously, and thus one can argue that such an assumption should be abandoned because it is metaphysical and not something you can ever observe in practice.[4] Note that inconsistency between all reference frames considered simultaneously does not mean observers will disagree over the facts, because if one observer asks another for information about a measurement result, they are still acquiring information about that result from their reference frame, just indirectly, and thus they would never run into a disagreement in practice.

    However, people still tend to find this notion of simultaneous consistency too intuitive to abandon, so the view remains unpopular, and most physicists choose to just interpret quantum mechanics as if there are no hidden variables at all. #1 you can argue is enforced by the evidence, but #2 is more of a philosophical position, so ultimately the view that there are no hidden variables is not “proven” outright, but proven only if you accept certain philosophical assumptions.

    There is actually a second way to restore local hidden variables which I did not go into detail on here, which is superdeterminism. Superdeterminism basically argues that if you had not just a theory which describes how particles behave now, but a more holistic theory that includes the entire initial state of the universe going back to the Big Bang and traces out how all particles evolved to the state they are in now, you could place restrictions on how that system develops such that it would always reproduce the correlations we see, even with hidden variables that are indeed Lorentz invariant.

    The obvious problem, though, is that it would never actually be possible to have such a theory: we cannot know the complete initial configuration of all particles in the universe, and so it’s not obvious how you would derive the correlations between particles beforehand. You would instead have to just assume they “know” how to be correlated already, which makes such models equivalent to nonlocal hidden variable theories, and thus it is not entirely clear how they could be made Lorentz invariant. I’m not sure anyone has ever put forward a complete model in this framework either; same issue with nonlocal hidden variable theories.