Hello everyone,
I’m curious whether any of you have experimented with using AI tools (for example chatbots like ChatGPT, image-generation tools, or other intelligent assistants) to take an existing 3D visualisation (a model, scene, or render) and modify or enhance it.
Specifically:
- What kind of 3D visualisations were you working with (architectural renders, product visualisations, game/film assets, etc.)?
- Which AI tool(s) did you use and how did you integrate them into your workflow?
- Did you ask the AI to change lighting, camera angle, textures/materials, composition, or completely reinterpret the scene?
- What were the results: what worked well, and what was frustrating or didn’t meet expectations?
- How did you handle the hand-off between the 3D tool (e.g. Blender, 3ds Max, Cinema 4D) and the AI?
- Any advice for someone who’s thinking of trying this for the first time (tool choice, prompt writing, legal/copyright issues, time saved or extra work introduced)?
I believe this kind of hybrid workflow (3D + AI) is still fairly new in many design/visualisation circles, and I’d love to hear firsthand experiences: successes, failures, and surprises. If you have any before/after examples or screenshots you’re willing to share, that would be great too.
Thanks in advance for sharing your thoughts and experiences!