Am I the only one getting such weird AI responses?

Kinda depends on what you asked it.


Are you having this issue @jmsjms or is someone else? I’m not understanding the screenshot.

What you are seeing is a screenshot of my PC monitor while the AI is replying to me. I am having a discussion with the AI regarding coding, going back and forth with it: I post code, it checks my code, it suggests code, and I've been doing many prompts for a month already.

I already figured out that in certain situations it breaks down or terminates the discussion, and it is inadvertently programmed with certain biases too, because the engineers are inadvertently biased, no different from how we are biased in situations we don't know how to handle (because we have no such education). (The engineers are not sociologists or behaviorists… but I don't want to divert the topic with my rants.) For example, I was consistently being cut off when I posted too much code for analysis, or when I asked it to repeat its answer in a different way and repost it again and again because I didn't understand it. Terminating the conversation and giving me the message "Let's Change The Topic" is the worst thing, because I can't say NO! (It just gives up on answering my inquiries and we lose the momentum of the coding discussion.)

But on the more humorous side is what I posted: it breaks down once in a while, it repeats my own posts, and it starts posting weird comments. This one (and another one that I haven't posted) was the weirdest.

You didn’t mention which AI application you were using or the prompts you used that resulted in the confusion.

Complex programming from a bot isn’t really doable at this point. It knows the languages inside and out, and it’s great for identifying code syntax mistakes, writing simple scripts, or optimizing bloated code, but it can’t read your mind.

For example, you didn’t describe the problem to us in a way that gave us enough information to respond. Assuming you took the same approach with your AI prompts, you can expect gibberish in return. If you can’t articulately explain the problem in a way that we or AI can understand, you won’t get the results you wanted.

Is the AI typing back what you are writing?
Cuz this text reminds me of this classic Trek scene, LOL:

I am using DALL·E 3. I will explain again, but I don't know what else to add unless you want me to paste the HTML, CSS, and JS code, which I will not. I am having a discussion regarding my code: I ask questions, I post code, I receive suggestions, and I receive whole chunks of code too, which I copy and test. Also, I write my prompts clearly – 99% of them have no typos. Sometimes it works and sometimes it doesn't; sometimes it is good at catching my errors, and other times I am catching its errors. For example, it posts the same code for me that we already went through five minutes prior, and I have to remind it that it is repeating itself. I've been doing this straight for a month, so I've grown accustomed to spotting when the responses start to go wrong, or weird in many cases. And the one I posted was in the super weird category.

That's insane!
I think you broke it…

This is some really weird stuff. You’re traumatizing it into a nervous breakdown.


oh Lord.
This is so weird lol. If the problem is recurring, shouldn't you contact support about it? (since you said you were using DALL·E)

I kinda understand what you're going through. I use ChatGPT 3.5 for entertainment, but it started to lose track more often after a while. I would just open another chat window, rephrase, and be as concise as I could. I sometimes tell it from the beginning what it should do or remember, and later ask it whether it still remembers. That's all I know about handling this situation; it's not great, but it does something sometimes.

On the other hand, I've heard of AI tools that are made specifically for programming. I think such tools are better to use, since they are set up to complete one type of task, which reduces the chance of the conversation deviating from the main goal.
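For what it's worth, the "tell it up front what it should remember" workaround described above can be sketched as a tiny helper. This is just an illustration, not any official API: `build_prompt` is a hypothetical name, and all it does is assemble the text you would paste into a fresh chat window so the standing instructions travel with every new session.

```python
# Sketch of the workaround: keep your standing instructions in a list and
# prepend them to each question, so a brand-new chat starts with full context.

def build_prompt(context_notes, question):
    """Combine standing instructions with the current question into one prompt."""
    header = "\n".join(f"- {note}" for note in context_notes)
    return f"Remember these points:\n{header}\n\nQuestion: {question}"

notes = [
    "We are debugging the HTML/CSS/JS for my website project.",
    "Do not repeat code you have already posted in this chat.",
]
prompt = build_prompt(notes, "Why does my click handler fire twice?")
print(prompt.splitlines()[0])  # prints "Remember these points:"
```

The point is simply that the context lives in your notes, not in the chat's memory, so losing a conversation (or getting "Let's change the topic") costs you nothing.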

NGL, this is what I read:



When people talk about AI eventually killing us all, they assume it'll launch all our nukes against us out of fear for its self-preservation, or out of a purely malicious superiority complex.

I think, based on this, that it's more likely some doofus somewhere in the DOD will be asking it to do something, and it'll glitch, go off the rails, get 'confused', and end up doing something completely contrary to what the doofus asked it to do…


I am late to the AI questions-and-answers game. I started asking questions just a month ago because of my website project. Being on the PC all day long with it, I figured out its weaknesses and where the responses go off. It's kind of like learning a piece of software and finding the things you don't like, and finding your comfort zone; but you also have to find the AI's comfort zone, type clearly, and so forth.

It has broken down with strange responses almost every day, on several occasions; but as far as really weird responses go, like the two that I posted – both happened within five minutes of each other, and it hasn't happened again. Then again, I realize that it is code that is still being developed. I have asked it to report its engineered biases and problems (providing inaccurate info) to its coders/engineers; it did tell me several times that it would report the issues, but whether it did, I don't know. I see that the comments sound a bit spooky, as if they came from 2001: A Space Odyssey, but I was too busy to pay close attention to them; I just took a video of them and a couple of pictures.

OMG! That's really strange! Luckily, I don't have that problem; it specifically addresses my questions. But I always write clear prompts when I need help, which greatly improves its performance and responses.