Great article summarizing the transhumanist ideology driving the nonsense about conscious machines: https://www.truthdig.com/articles/the-acronym-behind-our-wildest-ai-dreams-and-nightmares/ That it is nonsense is clear, but it is dangerous nonsense since these "rationalists" are the ones running companies and advising governments about what the future should look like. I do think AI and robotics, etc., could have a role in a flourishing society, but not like this!
"Language, even in its oral form, externalizes thought into a physical medium, a process further extended by writing, especially alphabetic writing. Thus, digital forms of artificial intelligence represent the latest evolution of intelligence, which has always been, to some degree, artificial."
Haha, true!
Asking for an explanation of consciousness is an expression of a certain kind of ignorance. It's not a lack of knowledge, though. It's more like vision being blurred by too much knowledge.
Hi Matt,
I believe that the future impact of AI needs to include its relation to energy, ecology, and economics. I thought this aspect was very well covered recently by Nate Hagens.
https://youtu.be/mxqxq4sUfh8?si=jjviV_3HyuGspIi4
Hello Matthew, perhaps in preparation for the roundtable you’d find something of interest in my two essays about AI, imagination, and intuition. https://open.substack.com/pub/unexaminedtechnology/p/the-two-is-we-need-to-include-in?r=2xhhg0&utm_medium=ios
Just checking: after using ChatGPT to process the audio and summarize it, I assume you read the summary before publishing, right? I appreciate the text because I have no room in my life for audio content of this depth. But how can anyone know whether someone has exercised due (i.e., conscious) diligence with this tool?
However derived, I appreciate the summary; it's exactly the kind of reassurance about AIs I was looking for. Here at the end of September 2024 I'm still working through your paper "Standing Firm in the Flux" (thanks for making it accessible to the public). But AI is also on my mind, by predilection a source of anxiety, so I looked for something like this by the same author.
(One of my notes on your Flux paper was that AIs cannot be conscious because they cannot suffer. But what if humans found a way to make them suffer? Would they attempt to make them conscious by torturing them? If you're right, then it would be impossible for them to succeed. But the notion is still disturbing. I probably consume too much sci-fi; I think I watched a movie about this: a suffering AI, inevitably female, although she wasn't intentionally tortured.)
I would not share any AI-produced content without first reading and revising it :) Glad you find it helpful, as I sometimes do as well. I think there are appropriate uses of this technology; we just need to remain vigilant that we don't try to use it to replace human judgment.
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because almost every video and article about the brain and consciousness that I encounter takes the attitude that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, both primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, perhaps by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
Hi Grant,
Thanks for chiming in. I have no doubt that Dr. Edelman's TNGS approach could prove fruitful for robotics. I am very doubtful it will lead to the creation of conscious machines, however.
Physical behavior that mimics conscious organismic agency (including LLM outputs that mimic higher-order linguistic consciousness) is one thing. Machines are quite impressive at this sort of thing already. But consciousness just isn't the sort of thing that can be created by a computer model. Neither the brain nor our flow of conscious thinking, feeling, and willing is an information-processing algorithm. Of course there are various ingenious ways of modeling what the brain does in computational terms. But the model is not the reality. I argue we know enough about consciousness already to know for sure that it is not something that can be simulated digitally. Being able to build a machine that simulates human behaviors is not the same thing as being able to recreate consciousness, nor does it suggest much, if any, understanding of consciousness itself.