That’s a pretty shit take. Humankind spent nearly 12 thousand years figuring out the combustion engine. It took 1 million years to figure out farming. Compared to that, less than 500 years to create general intelligence will be a blip in time.
I think you’re missing the point, which I took as this: what arts and humanities folks do is valuable (as evidenced by efforts to recreate it), despite common narratives to the contrary.
Of course it’s valuable. So is, e.g., soldering components on a circuit board, but we have robots for doing that at scale now.
Do you think robots will ever become better than humans at creating art, in the same way they’ve become better than us at soldering?
Feel free to audit my comments to confirm my distinct lack of GPT enthusiasm, but that question is unanswerable.
What is “creating art”? A distinctly human thing? Then trivially no. I don’t know how many people go with this interpretation, though I think many artists and art appreciators do, at least some of the time.
Is it drawing pretty pictures? That’s probably too reductive for even the most hardline tech enthusiasts, but computers are already very good at it. If I want to, say, get my face into something that looks like an old-timey oil painting, computers are way faster than humans.
Is it making things that make us feel something? They can probably get pretty good at this. It’s unclear how novel the results will be, but most people aren’t exposed to most art, so you could probably produce novel feelings on an individual level pretty well.
Art is so fuzzy and used with such a range of definitions that it’s not really clear what this is asking.
Even if they’re better, the future might still suck. Machines are technically better than humans at all the components of carpentry, but I’d rather furniture weren’t soulless minimalist MDF landfill garbage and carpenters could still earn a living, even if that meant my chairs were a bit uneven.
Not if climate change drives humans extinct before they can make those improvements.
I guess any robots we leave behind will win by forfeit!
Nah, humans are hardier than robots and will live longer. The power grid will shut down long before the last human settlements near the poles die of crop failure.
Well that seems depressingly likely to be accurate.
I’m doing my part by not driving a car, but most people are willing to be part of the problem if it makes their lives easier.
Yep.
Quite easily, yes. Unlike humans, with their limited lifespans and slow minds, Artificial Intelligence could create hundreds of different paintings in the time it’d take me to finish one.
Being able to put out lots of works isn’t the same as being able to come up with good, meaningful art?
That depends on things we don’t know yet. If it can be brute-forced (throw loads of computation power, gazillions of trial-and-error attempts, and petabytes of data including human opinions at it), then yes, “lots of work” can be an equivalent.
If it can’t, we have a mystery to solve. Where does this magic come from? It cannot be broken down into data and algorithms, yet it still emerges in the material world? How? And what is it, if not something dependent on knowledge stored in matter?
On the other hand, how do humans come up with good, meaningful art?
Talent? Practice. Isn’t that just another equivalent of “lots of work”? This magic depends on many learned data points and acquired algorithms, executed by human brains.
There is also survivorship bias. Millions of people practice art, but only a tiny fraction are recognized as artists (if you ask the magazines and wallets). Would we apply the same measure to computer-generated art, or would we expect machines to shine in every instance?
As “good, meaningful art” still lacks a good, meaningful definition, I can see humans moving the goalposts as technology progresses, so that it always remains a human domain. We just like to feel special and have a hard time accepting humiliations like being pushed out of the center of the solar system, or placed on one random planet among billions of others, or being just one of many animal species.
Or maybe we are unique in this case. We’ll probably be wiser in a few decades.
What does it even mean to brute-force creating art? Trying all the possible prompts to some image model?
The approach people take to learning or applying a skill like painting is not brute-forcing; there is actual structure and method to it.
It doesn’t have to be that random, but it can be. Here’s what I wrote: “throw loads of computation power, gazillions of trial-and-error attempts, and petabytes of data including human opinions”.
Ok, but isn’t that rather an argument that it can eventually be mastered by a machine? Machines excel at applying structure and method, with far more accuracy (or the precise amount of desired randomness) and speed than we can manage.
The idea of brute-forcing art comes down to philosophical questions. Do we have some immaterial genie in us that cannot be seen or described by science, and cannot be recreated by engineers? Is art something that depends on who created it, or does it depend on who views it?
Either way, what I meant is that it’s conceivable that more computation power and better algorithms will bring machines closer to being art creators, although some humans will surely reject that solely because they are machines. Time will tell.
Really, only around 80 years passed between the first machines we’d consider computers and today’s LLMs, so I’d say that’s pretty damn impressive.
That’s why the sophon was sent to disrupt our progress. Smh
LLMs are not a step to AGI. Full stop. Lovelace called this like 200 years ago. Turing and Minsky called it in the ’40s.
We may not even “need” AGI. The future of machine learning and robotics may well involve multiple wildly varying models working together.
LLMs are already very good at what they do (generating and parsing text and making a passable imitation of understanding it).
We already use them with other models. For example, Whisper is a model that recognizes speech: you feed its output to an LLM to interpret, run the LLM’s JSON output through a traditional parser to drive a motion-control system, then go back to an LLM to produce text for one of the many TTS models so the robot can “tell you what it’s going to do”.
Put it in a humanoid shell or a Spot dog and you have a helpful robot that looks a lot like AGI to the user. Nobody needs to know that it’s just 4 different machine learning algorithms in a trenchcoat.
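To make that concrete, here’s a minimal sketch of such a chain in Python. Only the Whisper calls are a real API (from the openai-whisper package); llm_complete, motion_control, and tts_speak are hypothetical stand-ins for whichever LLM, motion, and TTS backends you’d actually wire in.

```python
import json
import whisper  # openai-whisper, a real speech-to-text model

# Hypothetical stand-ins for your LLM, motion-control, and TTS backends.
def llm_complete(prompt: str) -> str:
    # e.g. call a local llama.cpp server or a hosted API here
    return '{"action": "set_furnace", "delta_degrees": 2}'

def motion_control(command: dict) -> None:
    print("executing:", command)  # placeholder for real actuator code

def tts_speak(text: str) -> None:
    print("speaking:", text)  # placeholder for a real TTS model

# 1. Speech -> text, handled by a dedicated speech-recognition model.
heard = whisper.load_model("base").transcribe("mic_capture.wav")["text"]

# 2. Text -> structured intent: the LLM is prompted to emit JSON only.
raw = llm_complete(
    "Convert this request to JSON with keys "
    f"'action' and 'delta_degrees': {heard}"
)

# 3. JSON -> machine commands via a traditional parser, no ML involved.
intent = json.loads(raw)
motion_control(intent)

# 4. Intent -> spoken confirmation: the LLM drafts it, the TTS voices it.
tts_speak(llm_complete(f"In one sentence, tell the user you are doing: {intent}"))
```

Each stage is an ordinary function call handing plain text or JSON to the next model, which is exactly why the whole thing can look like AGI to the user while being four separate models in a trenchcoat.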
Okay, so there are things they’re useful for, but this one in particular is fucking… not even nonsense.
Also, the ML algorithms exponentially increase the necessary clock cycles with each one you add.
So it’s less a trenchcoat and more an entire data center.
And it still can’t understand; it’s still just sleight of hand.
Yes, thus “passable imitation of understanding”.
The average consumer doesn’t understand tensors, weights, and backprop. They haven’t even heard of such things. They ask it a question, as if it were a sentient AGI. It gives them an answer.
Passable imitation.
You don’t need a data center except for training, either. There’s no exponential term, as the models are executed sequentially. You can even flush the huge LLM off your GPU when you don’t actively need it.
I’ve already run basically this entire stack locally and integrated it with my home automation system, on a system with a 12GB Radeon and 32GB RAM. Just to see how well it would work and to impress my friends.
You yell out “$wakeword, it’s cold in here. Turn up the furnace” and it can bicker with you in near-realtime about energy costs before turning it up the requested amount.
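The “flush it off your GPU” part is ordinary sequential load-and-unload: only one model occupies VRAM at a time, so costs add rather than multiply. Here’s a minimal PyTorch sketch of the pattern; the tiny load_llm stub is a hypothetical stand-in for loading real multi-gigabyte weights, while the memory-management calls are real.

```python
import gc
import torch
import torch.nn as nn

def load_llm() -> nn.Module:
    # Hypothetical stand-in: in reality this loads multi-GB LLM weights.
    return nn.Linear(4096, 4096).to("cuda")

def answer_once() -> None:
    model = load_llm()        # weights now occupy VRAM
    # ... run one generation here, then evict the model so the GPU
    # is free for the next model in the chain (STT, TTS, ...).
    model.to("cpu")           # move the weights out of VRAM
    del model
    gc.collect()
    torch.cuda.empty_cache()  # return cached VRAM to the driver
```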
One of the engineers who wrote ELIZA formed, like, a deep connection to and relationship with it. The person who wrote it.
Painting a face on a spinny door will make people form a relationship with it. That’s not a measure of AGI.
‘An answer’ isn’t hard. A Magic 8-Ball does that. So does a piece of paper that says “drink water, you stupid cunt.” This makes me think you’re arguing from commitment or identity rather than knowledge or reason. Or you just don’t care about truth.
Yeah, they talk to it like an AGI. Or a search engine (which is a step to AGI, now largely crippled by LLMs).
Color me skeptical of your claims in light of this.
I think you’re misreading the point I’m trying to make. I’m not arguing that an LLM is AGI or that it can understand anything.
I’m just questioning what the true use case of AGI would be that can’t be achieved by existing expert systems, real humans, or a combination of both.
Sure, DeepSeek or Copilot won’t answer your legal questions. But neither will a real programmer, nor will a lawyer be any good at writing code.
However, when the appropriate LLMs with the appropriate augmentations can be used to write code or legal contracts under human supervision, isn’t that good enough? Do we really need to develop a true human-level intelligence when we already have 8 billion of those looking for something to do?
AGI is a fun theoretical concept, but I really don’t see the practical need for a “next step” past the point of expanding and refining our current deep learning models, or how it would improve our world.
Those are not meaningful use cases for LLMs.
And they’re getting worse at even faking it now.
I think it’s pretty natural for people to confuse the mechanisms of communication with the inherent characteristics of the entity you’re communicating with: “if it talks like a medical doctor, then surely it’s a medical doctor”.
Only that’s not how it works, as countless politicians, salesmen, and conmen have demonstrated. No matter how much we dig down into subtle details, communication isn’t really guaranteed to tell us all that much about the characteristics of what’s on the other side: they might just be lying or simulating. There are even entire societies and social strata educated since childhood to “always present a certain kind of image” (just go read about old wealth in England), in other words, to project a fake impression of their character in the way they communicate.
All this to say that it doesn’t require ill intent for somebody to go around insisting that LLMs are intelligent: many if not most people try to read the character of a subject from the language the subject uses (which they shouldn’t, but that’s how humans evolved to think in social settings), so they truly believe that whatever produces language like an intelligent creature must be an intelligent creature.
They’re probably not the right people to be opining on cognition and intelligence, but let’s not assign malice to it; at worst it’s pigheaded ignorance.
I think the person my previous comment was replying to wasn’t malicious; I think they’re really invested, financially or emotionally, in this bullshit, to the point that their critical thinking is compromised. Different thing.
Odd loop backs there.
Pray tell, when did we achieve AGI so that you can say this with such conviction? Oh, wait, we didn’t - therefore the path there is still unknown.
Okay, this is no more a step to AGI than the publication of ‘Blindsight’ or me adding tamarind paste to sweeten my tea.
The project isn’t finished, but we know basic stuff. And yeah, sometimes history is weird, sometimes the Enlightenment happens because of oblivious assholes having bad opinions about butter and some dude named ‘le rat’ humiliating some assholes in debates.
But LLMs are not a step to AGI. They’re just not. They do nothing intelligence does that we couldn’t already do. You’re doing pareidolia. Projecting shit.
When the Jews made their first mud golem ages ago?
To create general AI, we first need a way for computers to communicate proficiently with humans.
LLMs are just that.
It’s not, though. It’s autocorrect. It is not communication. It’s literally autocorrect.
That is not an argument. Let me demonstrate:
Humans can’t communicate. They are meat. They are not communicating. It’s literally meat.
Spanish is not English. It’s Spanish.
A lot of people are really emotionally invested in this tool being a lot of things it’s not. I think that’s because it’s kind of the last gasp of pretending capitalism can give us something that isn’t shit, the last thing that came out before the enshittification spiral tightened (never mind the fact that it’s largely a cause of that), and I don’t think any of you can be critical or clear-headed here.
I’m afraid we’re so obsessed with it being the bullshit sci-fi toy it isn’t that we’ll ignore its real use cases, or worse: apply it to its real use cases, completely misunderstand what it’s doing, and Adeptus Mechanicus our way into getting so fucking many people killed or maimed. Those uses are mostly medicine-adjacent.
I was just pointing out that your emotional plea, that this technology is just autocorrect, is not an argument in any way.
For it to be one, you need to explicitly state the implication of that fact. Yes, architecturally it is autocomplete, but that does not obviously imply anything. What is it about autocomplete that bars a system from the ability to understand?
Humans are made of meat but that does not imply they can’t speak or think.
If I said ‘this is just a spoon’ you’d know what I meant. This is not an emotional appeal.
I’m not saying computers can’t ever think. I’m saying this is just autocorrect, a fancy version of the shit I’m using to type this.
Autocorrect is not understanding, and if you don’t understand that, you have zero understanding of either tech or philosophy. This topic is about both, so you really shouldn’t be making assertions. Stick to genuine questions.
Humanity didn’t spend all that time figuring those things out, though. Humanity spent that time growing to the point where it could make them happen (and AI is younger than 500 years, IMO).
Also, we are the same people today as people were then. We just have access to what our parents’ generation made, and so on.
Hence “will be a blip in time”
Completely disconnected from, and irrelevant to, anything I wrote.
You jinxed it. We aren’t gonna be around for 500 years now are we?
This is some pretty weird and lowkey racist exposition on humanity.
Humankind isn’t a single unified thing. Individual cultures have their own modes of subsistence and transportation that are unique to specific cultural needs.
It’s not that it took 1 million years to “figure out” farming. It’s that one specific culture of modern humans (biologically, humans as we conceive of ourselves today have existed for about 200,000 years, with close relatives existing for in the ballpark of 1 million years) started practicing a specific mode of subsistence around 23,000 years ago. Specific groups of indigenous cultures remaining today still don’t practice agriculture, because it’s not actually advantageous in many ways: stored foods are less nutritious, agriculture requires a fairly sedentary existence, and it takes a shitload of time to cultivate and grow food (especially when compared to foraging and hunting), which leads to less leisure time.
Also, where did you come up with the number 12,000 for “figuring out” the combustion engine? Genuinely curious. Like, were we “working on it” for 12k years? I don’t get it. And the engine isn’t exactly a net positive anyway; it has come with some pretty disastrous consequences. I say this because you’re proposing a linear path for “humanity” forward, when the reality is that humans are many things, and progress viewed in this way has a tendency toward racism, or at least ethnocentrism.
But also yeah, the point of this meme is “artists are valuable.”
Getting “racism” from that post is a REAL stretch. It’s not even weird; agriculture and mechanization are widely considered good things for humanity as a whole.
ANY group of humans beyond the individual is purely a social construct, and classing humans into a single group is no less sensible than grouping people by culture, family, tribe, country, etc.
Agriculture is certainly more efficient in terms of nutrition production for a given calorie cost. It’s also much more reliable. Arguing against agriculture as a good thing for humanity as a whole is the thing that’s weird.
I’m really not “arguing against agriculture,” I’m pointing out that there are other modes of subsistence that humans still practice, and that that’s perfectly valid. There are legitimate reasons why a culture would collectively reject agriculture.
But in point of fact, agriculture is not actually more efficient or reliable. Agriculture does allow for centralized city-states in a way that foraging/hunting/fishing usually doesn’t, with a notable exception of many indigenous groups on the western coast of Turtle Island.
Here’s a study positing that agriculturalists are not in fact more productive, and are actually more prone to famine: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3917328/
But the main point I was trying to make is that different expressions of human culture still exist, and not all cultures have followed along the trajectory of the dominant culture. People tend to view colonialism, expansion and everything that means as inevitable, and I think that’s a pretty big problem.
The first heat engines were fire pistons, which go back to prehistory, so 12k to 25k years sounds about right. The next application of heat to make things move, via steam, happened around 450 BC, about 2.5k years ago. Although not direct predecessors to the ICE, they are all heat engines.
Fire pistons are so damn cool. Yeah, that makes sense then.
This kind of thinking is dangerous and will hinder planetary unification…
All I’m trying to point out is that distinct cultures are worthy of respect and shouldn’t be glossed over.
But be real with me: can you think of a single effort for “planetary unification” that wasn’t a total nightmare? I sure can’t.
This attitude is what prevents us from unifying…smh