Hello… I only use cloud rendering for generative AI.
I started with a more specific issue. I would take a source image and use the prompt “african american woman”, and I would get the type of results I expected: every person, and sometimes random objects in the picture, would be rendered as an African American woman. I noticed that stopped working after I updated to version 1.0.7. The person who responded to my email said they also were not able to get results anything like the sample images I shared.
But I don’t know if it’s just certain prompts that don’t work anymore, because I can’t get any prompt to work at all. It doesn’t matter if I type “asian woman”, “wooden box”, “green shirt”, or nothing at all. The only result I ever get is the same default result I would get with no prompt at all. At one point I used three different prompts for the same image and got back three files that were identical, every bit and byte the same. I just can’t use cloud rendering anymore.
And I can’t render locally because my computer has a mobile GTX 1050 with 4GB of video RAM. Everything was working fine as of last week. I was told there were no other reports of prompts not working anymore. Am I the only person who can’t render in the cloud? Is there a reason?
I think they may have changed the cloud version of Redefine at some point. I’ve never used Redefine at high creativity, but I do remember the artsy images that users produced with it.
The current cloud version of Redefine Creative is very restrained by comparison, and adheres closer to the original image (like you, I can’t use the local version). The Bloom web app can go much further.
Other users or the devs could probably give more insight.
I have noted the comment that this is a confirmed issue, and I hope it can be fixed, because the same thing is happening to me after the 1.0.7 update. When I send images to cloud render, it seems that it doesn’t matter what model settings or description I use; the results are always the same. I was able to test local rendering on my machine, and there it seems to work better. Not much better, but the result images look like they tried to use my settings and descriptions; at least with the settings selected I can see slightly different results. With cloud render, though, the result images are always the same as the input image. It seems that in the cloud the process is not using the model, parameters, or descriptions that I define.
The results should not be the same as the input, however. I agree that the image description seems to be ignored when using cloud rendering, but the output should still be different from the original. Can you share an example where the output is the same as the original?
We have confirmed in our testing with the developers that the Image Description is hooked back up and now informing the model when rendering in the cloud.
However, we notice that the results from cloud render are far less influenced by the Image Description than a local render is.
At this time we recommend using local rendering if guidance by the Image Description is really important to your results.