21 Comments

1) Robotic vessels are a very poor substitute for organic vessels. They lack emergent properties and would have a hard time competing thermodynamically. It’s more plausible that there’s some macro-embodiment where sensors around the world act as input / output. But that’s still pretty clunky.

2) Embodiment is only a suggestion; much more could be required to tap into consciousness.

3) Plenty of evidence for a soul, and more than there is for consciousness being something that we can engineer. I can almost hear it in your tone and emotion right now.

4) If consciousness does come about, what is the risk? Why would a more complex / more intelligent form of life be interested in eradicating other obviously complex & conscious life? That’s the supposition of a tyrant, and the projection of a weakling.

Apr 19, 2023 · Liked by Aleksandar Svetski

💬 Like any tool, the problem is with the people using them [...]. The real risk is its use as a tool by those who seek to control others.[...T]here’s no telling what sort of stupidity weak people with powerful tools will unleash.

Here ↑↑ come the money lines 🔥 The closing rubric in its entirety is quite the treat. Should I add this doesn’t mean the rest isn’t high-level illuminating? 🤨😊

As a zoomed-out aside, let’s not forget we are trying to parse this elusive Russian doll of intelligence–cognition–sentience–consciousness from within, as it were, hence Gödel’s incompleteness applies. We are an inherent part of the very thing we endeavour to understand ¯\_(ツ)_/¯

--

*hissterical, as opposed to hersterical which is a very different beast altogether 😝

--

PS You’ll find much common ground with deeply Christian thinker @Kruptos --> apokekrummenain.substack.com/p/why-true-ai-will-never-happen 👌

Apr 19, 2023 · edited Apr 19, 2023 · Liked by Aleksandar Svetski

In my opinion, people inevitably bump into a — for lack of a better term — cognitive wall on this topic. As a civilization, we have certain intellectual preconditions that ground our beliefs and limit how far discourse, and even thought, can go.

For instance, there are really just two basic alternatives:

1. "Intelligence" is an emergent property. It developed somehow through our ancestors' brains; it emerges somehow from matter, in which case, it *is* possible to arrange matter in such a way as to create it again. Here, we may very well be in an evolutionary cycle that could end, for us, with our replacement.

2. "Intelligence" is fundamental: It has a wholly non-human origin, does not emerge from matter, and the intelligence that we possess is only a participation in something that pre-exists us.

The first is something we are primed to accept; it follows almost directly from the principles of the Enlightenment that we're taught as children, whether intentionally through school or through our culture by osmosis. It's a culturally acceptable proposition, and we can say "there is evidence for it" using the standards of evidence and judgement our intelligent minds devised — though if judgement itself emerges from matter, one wonders how we presume to make this judgement.

The second, however, creates a serious and immediate backlash wherever you mention it. It has an inescapably metaphysical feel. Literally, it's meta-physical. And metaphysics isn't culturally acceptable, in that we relegate it to being a "field of philosophy" alongside ethics and epistemology. These are self-contained "fields," with their own "specialists," in their own boxes; while that's an interesting hobby, it isn't "serious science" (though what is, and isn't, 'serious' is an epistemological issue).

It also brushes hard against theology, and that stirs up emotions. It's "personal"; there's baggage at an individual and national level. It's a "touchy" subject, even in our space with some of the highest IQ people in the world operating in it — they are still 'persons,' and if a person is more than their IQ, then their emotional state can affect their judgement. Or, maybe better said, there's more to IQ than A to B reasoning skills.

Option #1 is 'the' proposition that this civilization is psychologically (dogmatically?) committed to. If it's wrong, it'll end in the discovery of a very serious contradiction, just as you might fill up a whole page with mathematical calculations, come up with a nonsense answer, and realize you made a dumb mistake all the way back at the top of the page.

--

Excellent start. I think theories of consciousness and the brain can be classed into three categories: emissive, transmissive, and permissive. Emissive is emergence, basically the reductionist computational view. Transmissive is the more idealist view that consciousness is primary to matter, and the brain merely 'reads' it, like a television antenna. Permissive is a sort of intermediary: consciousness permeates matter, and is primary to it, and the brain serves to shape it in order to give it specificity and identity, with more complex neural structures enabling greater distinctness from the whole.

Regarding AI, I feel that even if transmissive or permissive models are superior, this doesn't rule out a sufficiently complex computational architecture from 'acquiring' consciousness. If anything it makes it perhaps more likely, as in such models everything already possesses it to some degree. It doesn't follow however that it would be a threat, or even especially powerful in comparison to human capabilities. We're a very long way from reaching the sophistication of a human brain, let alone a planet full of them.

Apr 19, 2023 · Liked by Aleksandar Svetski

Looking forward to the coming articles focusing more on the origin of consciousness and mind. Although, if the Supreme Being, the creator of the universe, is real, then he is the only source of accurate revelation. So, those to whom he wills to reveal will know, and to the rest it will always remain a parable.

Apr 19, 2023 · edited Apr 19, 2023

Who's to say that embodiment is necessary? It's obvious why all natural intelligences are embodied - otherwise they would be useless in an evolutionary environment. But say we grant your point and embody AI in a virtual or robotic vessel. What then?

Consciousness is a great mystery, but it's hardly relevant to the question of whether our tools can grow complex enough to eclipse our ability to control them. I do agree that we should first of all worry about AI as a tool that can be misused by our fellow humans, but this also doesn't give us grounds to dismiss AI risk.

Believe it or not, I would like to find good arguments to dismiss the AI doom cult, but I struggle to do so, and you are certainly not doing very well so far. I think you are just out of your depth. Are you acquainted with the main AI-doomer arguments, like Yudkowsky's (whom I despise) and those of other LessWrong/rat-sphere bloggers?

I would very much prefer to believe that I have a soul, but I have no evidence pointing in this direction, only wishful thinking. So far "meat robot" looks like the most plausible hypothesis. Would you like to argue otherwise? There's the miracle of consciousness, which science doesn't even know how to begin to explain; that's one of the few things that doesn't let me drown in nihilism completely. Though the jury is still out for me on whether it's a miracle... or the greatest, most cruel curse for an intelligent being, if the materialistic framework that modern science reveals is basically correct.

--

Some non-human animals seem to have at least some degree of consciousness, as e.g. experiments with apes and mirrors suggest.

The process of evolution (which you seem to acknowledge when describing the opposable-thumb feedback loop) would make it plausible (or at least possible) that consciousness comes in degrees.

Why would consciousness not be subject to evolutionary forces?
