- Save a large batch (hundreds or thousands) of images with significantly mixed sizes (the only AI option used was Enhance 4x), which causes order-of-magnitude variance in per-image processing time
- Use a different batch tool to downscale the output back to the original size
- Objectively compare each upscaled/downscaled version with its original source; notice that ~1% of the images have an abnormally high difference. Compare those visually and realize the app saved the neighboring image in the batch under the wrong name.
I suspect this failure is caused by the new (undocumented) pipelining of processing and multithreaded saves using tmp files: when a smaller image finishes early, the pipeline depth gets disrupted and a neighboring image in the batch overwrites its output.
Topaz Photo AI [v1.3.5] on Windows
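To illustrate the suspected failure mode, here is a hedged sketch (not Topaz's actual code, which is not public) of the pattern that would avoid it: the output name must travel through the pipeline together with its image data, rather than being read from any shared "current name" state that a fast-finishing small image could get out of sync with. All names and the `.upper()` stand-in for the processing step are made up for illustration.

```python
import queue
import threading

def safe_pipeline(jobs, n_workers=4):
    """Process (name, data) jobs on worker threads; because each name is
    bound to its own data in the queue item, out-of-order completion of
    small/fast items cannot cause a save under a neighbor's name."""
    q = queue.Queue()
    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            item = q.get()
            if item is None:          # sentinel: shut this worker down
                q.task_done()
                return
            name, data = item         # name stays attached to its data
            processed = data.upper()  # stand-in for the AI enhance step
            with lock:
                results[name] = processed  # "save" under the correct name
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for job in jobs:
        q.put(job)
    for _ in threads:
        q.put(None)
    q.join()
    for t in threads:
        t.join()
    return results

jobs = [("img1.png", "a"), ("img2.png", "bb"), ("img3.png", "c")]
print(safe_pipeline(jobs))  # each name maps to its own processed data
```

The reported symptom (a neighbor's image under the wrong name) is exactly what you would expect if the name were instead tracked by pipeline position rather than carried with the data.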
Thanks for reporting this. Are you able to consistently reproduce this issue with some of those images?
Can you send some of the small images mixed with larger images so I can try to reproduce this?
You can securely submit your image(s) to my Dropbox using the link below. Please be sure to send me a note to let me know you sent something.
Dropbox File Request
No, it requires a large dataset of thousands of mixed-size images, literally many GB of images, none of which I own the copyright to. It is probably dependent on core/thread usage load anyway, since it's a ~1% failure rate and not predictable. It seems to happen if there are #k images mixed in with ##k images. The programmer has made assumptions about the save pipeline that are not valid, so they need to do a corner-case code review. This is why you need to design your own regression test with a before/after image compare.
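A minimal sketch of the kind of before/after regression check suggested above: compare each round-tripped (upscaled-then-downscaled) image against its original and flag outliers. The threshold and file names are illustrative assumptions; in practice you would load real pixel data with a library such as Pillow rather than the flat lists used here.

```python
def mean_abs_diff(a, b):
    """Mean absolute per-pixel difference between two equal-sized images,
    given as flat sequences of 0-255 values."""
    if len(a) != len(b):
        raise ValueError("image size mismatch - likely a mis-saved file")
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def flag_outliers(pairs, threshold=10.0):
    """pairs: iterable of (name, original_pixels, roundtrip_pixels).
    Returns names whose difference is abnormally high - candidates for
    the wrong-name overwrite described in this thread."""
    flagged = []
    for name, orig, rt in pairs:
        try:
            d = mean_abs_diff(orig, rt)
        except ValueError:
            flagged.append(name)  # size mismatch is itself a red flag
            continue
        if d > threshold:
            flagged.append(name)
    return flagged

# Tiny synthetic example: "b.png" was overwritten by a neighbor's data,
# so its round-trip pixels differ wildly from the original.
batch = [
    ("a.png", [10, 10, 10, 10], [11, 9, 10, 10]),       # small resampling error
    ("b.png", [200, 200, 200, 200], [10, 10, 10, 10]),  # neighbor's data
]
print(flag_outliers(batch))  # → ['b.png']
```

Run over a whole batch, the ~1% of images with abnormally high differences would surface immediately, without needing a visual pass.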
I shared this thread with my team. Batch processing issues like this are hard to debug because they are not consistently reproducible and vary from batch to batch.
Hopefully your comment can help clarify some of the issue, but I don’t have a good workaround or solution at this time.