Do you fear Artificial Intelligence?

Three things:

  1. I’m jealous.
  2. Um, bummer, I guess?
  3. I’m jealous.

I think that might be a different conversation, but I agree.
Human emotions, behavior, and thought processes are determined by more than just the brain or its electronic equivalent, whatever that may be.

Throughout our lives, we are influenced by more variables than we could imagine, including factors within our own bodies (hormones, gut flora) that shape our state of consciousness and how we perceive the world.

At some point, AI will reach a level of “perfection” that humans could never attain, and attempting to make its consciousness more like ours would require purposefully programming flaws into it.

At most, they would be able to simulate responses the way a human would: “This is how a human would react/feel in this situation; let me react in the same way.”

It’s indeed fascinating how our understanding of consciousness lags behind advancements in AI. AI models aren’t replicas of human consciousness, and their utility lies in their unique capabilities rather than mimicking human thought processes.


Yes, and that’s already happening with today’s large language model (LLM) AIs that understand our written languages and respond in kind with simulated politeness and friendly comments. As impressive as LLMs are at parsing enormous amounts of data, cross-referencing, identifying patterns within the data, and reassembling and summarizing that data into coherent text, much of the astonishment people have for these LLMs is the illusion of them seeming human.

LLMs and image-generation AIs will get better at what they do. Even so, their abilities will be limited by their programming and training data. As you mentioned, they don’t experience emotions, they don’t perceive the world firsthand, and they’re not biological organisms.

Looking ahead, I think AI’s most promising and perilous uses will involve solving complex problems that have eluded us. For instance, we could supply an advanced AI with all our knowledge about cancer and have it explore all possible avenues for cures or areas of research. However, the same AI could also be misused to develop recipes for biological warfare and terrorism.

I don’t think the first serious or existential AI dangers we face will involve AI replacing us or crossing a boundary into some form of consciousness. Instead, the more immediate danger will be how we use these new tools.


Thanks for the article, it was an interesting read :honeybee:

It’s such an interesting thing to contemplate: whether the astronomical number of calculations needed for a computer to appear to emulate human consciousness would amount to real consciousness, or just lots of code running on a powerful computer, doing what it’s told :thinking:

I think the truth is, we don’t know enough about what consciousness really is or where it comes from to be able to create one.

Would it be more convincing if, instead of code running on solder, wires, and silicon, it were made of electrical impulses running through organic compounds? Like human stem cells implanted in a potato that become reactive to jolts of electricity, and that you can read/write data to?

What if we replaced part of a human brain with chips that perform that part’s functionality? If we repeated this step continuously, at what point would you consider the person no longer a conscious being?


That’s such an interesting thought. I wonder whether that is possible and, if so, whether the consciousness as you and I know it would cease to exist at a certain point, leaving only a computer running on the residual data.


Most religions contend that a metaphysical soul is the seat of consciousness.

However, if we remove supernatural religious beliefs from the equation, consciousness must arise from the organized electrical and chemical processes that a physical brain’s neurons use to process data.

A biological brain is physically different from silicon semiconductors. But are the results fundamentally different in kind or degree? Does the difference have more to do with complexity, programming, and data processing capacity, or is there something fundamentally different between a biological brain and a computer processor that makes consciousness possible in one but not the other?

Our brains’ unique wiring makes our consciousness human. A computer won’t be human, but a good argument exists that complex computer circuitry and programming might eventually cross the boundary into some form of non-human self-awareness or consciousness that’s just as real as our own.


Consciousness is so mysterious and elusive. I wonder whether all living things experience it in some way, shape, or form, or whether at a certain point (like plants or single-celled organisms) they are just alive in a way that’s similar to a person in a coma, simply following a process :thinking:


What makes folks assume that consciousness is an emergent phenomenon? Is it because we haven’t found an organic “seat” of consciousness?

Imagine a group of engineers with an unlimited budget whose project was to create an android that could think for itself. They might eventually come up with a robot that could think and perform all the tasks we associate with an intelligent being, for example recognising a particular human from among the many similar models it was exposed to. It might also be able to examine its own thought processes to determine which factors were involved in that recognition. Nothing would be mysterious: it could ascertain exactly which factors were involved in recognising ‘John’ from all the other humans it was exposed to.

Now imagine a second group of engineers with a much more limited budget. To save money, they might cut the robot’s ability to examine its own thought processes, so it would still recognise particular humans but would not be aware of how this happened. When the robot met ‘John’, the recognition would just pop into its head fully formed. The robot would have to come up with some concept of how this happened; it might invent the concept of ‘consciousness’ and from that decide, ‘I am a conscious being.’

Evolution has the tightest budget of all: any unnecessary features are deleted from newer models. So we have no mechanism for examining our own inner workings, and we invented the concept of consciousness to explain the gap. We are the second robot.


Yeah, probably. That’s a good question, but what else could it be? There doesn’t appear to be a specific consciousness organ, which makes it likely that consciousness is an emergent property of the brain’s complex workings, especially the cerebral cortex.

Then again, cephalopods (octopuses, squids, and cuttlefish) lack a cerebral cortex and have a strange decentralized brain. For example, each arm of an octopus has a mini-brain of sorts that controls that arm. Despite the extreme neural differences, cephalopods appear fully conscious and exhibit signs of self-recognition and considerable intelligence.

There’s also the article I linked to in my first post, which describes how neurobiologists are beginning to believe that insects have a form of consciousness despite their extremely limited neural circuitry.

Everything considered, I suspect consciousness is an emergent property of our nervous system that natural selection determined was useful since it confers a single, organism-wide executive function. It’s just a guess, though, since I lack a better explanation. Any other explanation tends to lead to metaphysical conjectures.


The budget is tight in all ways except time, which has supplied three or four billion years’ worth of fiddling, tweaking, revisions, and optimizing in the most inefficient way imaginable — random trial and error.


Some people believe the octopus is an alien species that didn’t evolve on Earth but landed here and continued evolving.

The reasoning is that it’s completely unlike anything else in terms of its makeup.

How true that is, I don’t know.

Have you seen the Netflix documentary, My Octopus Teacher?

It documents a filmmaker who became fascinated with an octopus while diving. Over the course of many months, they befriended each other, forming a mutual bond that gave every indication of the octopus being sentient, curious, surprisingly intelligent, and valuing the trust and friendship it had formed with the filmmaker.

I welcome our new octopus overlords

The way I understand this theory is that every lineage of life increases in complexity at a reasonably steady pace (think technology and Moore’s law). This has been worked out as a mathematical formula: for instance, if you go back 500 million years, life should be at x level of complexity. The octopus doesn’t fit that formula very well.

This was in the paper the other day.
“Come see what I found for you!”

Nope, what has to happen will just happen