Do you fear Artificial Intelligence?

  • Definitely, it will eventually destroy us :skull:
  • I think it has the potential to destroy us :face_with_diagonal_mouth:
  • Nah, relax… it’ll be alright :call_me_hand:

0 voters

Was having a chat with some mates recently about AI, namely whether it poses a threat to humanity in the way it’s portrayed in movies like Terminator or Transcendence. What are your thoughts on the matter?

I think anything positive made to advance mankind will be weaponised and used for war.

It’s happened so many times before.

It’s only a matter of time before something goes wrong.


I don’t think AI will ‘destroy’ us, but as with any new technology, there will be good and bad to come from its use.

We have already seen proof of the theory that the first uses of any new tech are misinformation and porn.

One thing is for sure: things will not be the same. It’s a watershed moment, a real step change, a paradigm shift (I’ll stop trying to sound like an AI now).

1 Like

The simple algorithmic responses of today’s AI aren’t really AI.
Machine learning, right now, is all about learning patterns. The machines aren’t “learning”; they are building patterns. The real trouble comes when machines can learn on their own and change their own programming.

We aren’t there yet. But because humans like things simple and easy and don’t care what information they give up in the process, they keep building smarter machines and connecting them together. There’s no stopping that. Quantum computing is here. And NO ONE is ready for the havoc it will cause once it’s online, especially in the hands of bad actors.


Yes. I’ve seen the films.

1 Like

I would have checked both


I agree with PrintDriver. Today’s AI isn’t genuinely intelligent; it can’t do anything outside of the narrow capabilities of what it was programmed to do, which is to recognize and cross-reference enormous amounts of data to identify and organize patterns within the narrow scope of its programming.

AGI (artificial general intelligence) will be another matter. Its programming won’t impose narrow limits on its abilities. It will learn and improve its capabilities similarly to how an infant’s intellect gradually matures, learns, soaks up information, and begins to make its own decisions, pursue its interests, and determine its destiny within the constraints of its available options. Unlike humans, predetermined biology won’t be a constraint for AGI. Processing capacity will continue to improve, and AGI will likely be able to tweak and reprogram itself as it sees fit to some extent.

I’ve read in various places that AGI is anywhere from 5 to 50 years away. There’s also the question of consciousness—no one knows for sure what it is or how it arises. At this point, I think it’s anyone’s guess whether AGI will become self-aware and develop consciousness.

That’s one of the few certainties regarding where this whole thing is headed. :frowning_face:

This is the third article in a series on quantum computers.
Those will be the next breakthrough in processing capacity, and they are here now. IBM and Google have machines that are at least functional, and there are others out there as well.
They will make today’s computers obsolete overnight (at least from a supercomputer standpoint).

Fear? No.

Concern? Yes.

Didn’t vote since I didn’t feel there was a response for me. If there had been a choice between the second and third options, that probably would have been mine.


I don’t fear the technology of AI.
I fear those who use it with ill intent.

1 Like

The ill intent is already there. My understanding is that OpenAI and Alphabet have both scraped the internet and YouTube, knowingly violating copyright law, for content to train these voracious things… but it’s ok because “there was just no other way to do it”. It was never going to be an ethical project.

Source for that is NY Times reporting: ‎The Daily: A.I.’s Original Sin on Apple Podcasts


It’s already starting

1 Like

I will come at this from a different perspective. For the last 4–5 years I’ve been working as a software engineer with a heavy focus on AI advancement. The company I work for is a non-profit in a niche sector that deals with government data and various types of vehicles and structures, so I can’t say a whole lot.

I can say that it 100% has the potential to destroy us. I don’t think it will go as far as destroying us as a species, but it could destroy the society we have created. I also think the opposite: it has the ability to solve a lot of the world’s problems, if used correctly. There are people who are aware of this and have measures in place to hopefully give us time to adapt.

The AI technology that is publicly/commercially available is nowhere near the level of what has been developed behind closed doors. There’s a limit on what can be released and when. For instance, the image/music/video generation and AI text responses that all seem so recent are basically just toys compared to what has already been developed.


I appreciate that the technology being developed right now is definitely being used for bad reasons in a few places in the world.

Personally, I think we (as humans) have a tendency to personify AI (like in those movies). Assuming we ever did create an actual AI capable of thinking in a similar way to you and me, why would it care about taking over or occupying the physical space we live in?

That all said, I don’t think there will ever be an artificial consciousness in the same way that you and I think. Eventually they might be able to get something close enough to simulate it, but that’s all it’ll ever be.

1 Like

Not that it couldn’t have a tech glitch or anything…


Oh and Boston Dynamics has retired their Atlas Robot.

Meet the new AI-enhanced Atlas.


Yes, I think that’s true. People imagine AI and robots that will be like us, even though there’s no reason to assume that will be the case unless we specifically design them to simulate us. One thing that worries me is our blindness regarding what these AI technologies will give rise to—both good and bad. Anticipating what we’ve never encountered before is difficult.

This reminds me of something I read the other day. Many scientists are beginning to think that all animals, from humans to insects, possess varying degrees of consciousness and self-awareness.

No one knows for sure what gives rise to consciousness (setting aside religious beliefs). But if it’s not a uniquely human, primate, or mammalian attribute, it seems entirely possible that, at some point, AI software and the enormous computing capacity behind it will cross the boundary into self-awareness. The problem is that no one really knows.


Just saw this today and reminded me of this thread :wink:


I think that part is fascinating: how little we understand about our own consciousness, how slowly commonly accepted ideas in neuroscience trickle into popular culture, and yet we are building AI models that folks believe are like us. They simply can’t be, and I’m not sure they would have as much utility if they were.


Three things:

  1. I’m retired.
  2. I don’t have too many good years left to be concerned by all this.
  3. I’m retired.