I’ve seen a lot of new versions now, many with new features.
But with every new change and feature, the software's complexity rises.
In many past releases, functionality that previously worked suddenly stopped working and had to be fixed once more (new features required alterations, and not all parts of the affected code were thought of).
What I propose is that you create a solid integration test base: at least one test for each of the core functionalities in Photo AI.
I assume you have some sort of feature-freeze point during the week when you focus on bug fixes. At that time you run all integration tests again (or even on every check-in as part of CI), and you will see in advance which core functionality no longer works as expected.
And a new release is only built if the integration tests are green.
The more functionality you add, the more tests you add.
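To illustrate the gating idea, here is a minimal sketch of a release gate: run the suite, and only trigger the build when everything is green. The test path and the build command are placeholders, since I obviously don't know your actual setup:

```python
import subprocess
import sys

# Run the whole integration test suite (pytest is just an assumption here).
tests = subprocess.run(["pytest", "tests/integration"])

if tests.returncode != 0:
    print("Integration tests are red -- no release build.")
    sys.exit(1)

# Hypothetical build command; replace with your real release build step.
subprocess.run(["build_release.cmd"], check=True)
```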
Using the CLI is a wonderful way of commanding Photo AI to do exactly the one thing you want to test in each integration test. You have already laid out the basics for it with the new 2.0.0 features (autopilot prefs, specifying a specific model that the autopilot is forced to use; currently via registry values, later hopefully via direct CLI command parameters).
It should be pretty easy to write tests for various purposes this way.
Simple examples:
1. Integration test for a working CLI
One test executes tpai.exe with a predefined image input and an output, then checks that the exe process finishes with exit code 0 and that the result image meets some sort of expected output [>0 bytes, or even better: is it loadable as an image, does it have the desired target dimensions, and so on]. (A minimal sketch follows after this list.)
2. Integration tests for the CGI upscaling model
Grab two pictures with CGI content: one that the autopilot already recognizes as CGI, and one that it doesn't recognize as CGI (and uses some other model on). Then write two tests for each of the two files:
a) Set the autopilot prefs to upscale "auto". Run the test file through the CLI. Check the result (it's up to you how far you want to go in terms of what you check in the result; just dimensions, content, whatever; anything tested is better than nothing at all). Since the CLI also returns all detected autopilot values as JSON, you can also check expected values against that result.
b) Set the autopilot prefs to upscale "cgi". Run the test file through the CLI and check the results. (See the second sketch after this list.)
3. Integration tests for face detection
Much like described in No. 2, but with images containing faces (1, 2, 3, 25, 50, 100) to make sure that Photo AI (still) processes these images correctly [unlike Photo AI 2.0.0, which currently crashes with images containing many faces]. (See the third sketch after this list.)
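To make No. 1 concrete, here is a minimal pytest sketch. The tpai.exe install path, the --output flag, and the expected dimensions are assumptions on my side; adjust them to the real CLI syntax:

```python
import subprocess
from pathlib import Path

from PIL import Image  # pip install Pillow

# Assumed install path; adjust to wherever tpai.exe lives.
TPAI = r"C:\Program Files\Topaz Labs LLC\Topaz Photo AI\tpai.exe"

def test_cli_produces_valid_image(tmp_path: Path):
    out_dir = tmp_path / "out"
    out_dir.mkdir()

    # Assumed CLI syntax: input image plus an output folder flag.
    result = subprocess.run(
        [TPAI, "testdata/input.jpg", "--output", str(out_dir)],
        capture_output=True, text=True,
    )
    assert result.returncode == 0, result.stderr

    out_files = list(out_dir.iterdir())
    assert out_files, "no output image was written"
    assert out_files[0].stat().st_size > 0

    # Even better: is it loadable as an image, with the target dimensions?
    with Image.open(out_files[0]) as img:
        assert img.size == (3840, 2160)  # hypothetical expected size
```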
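For No. 2, the same idea plus the registry-based autopilot prefs and the JSON check. The registry key, value name, and JSON field names are pure assumptions; the real ones would need to be filled in:

```python
import json
import subprocess
import winreg  # Windows-only; the prefs currently live in the registry

TPAI = r"C:\Program Files\Topaz Labs LLC\Topaz Photo AI\tpai.exe"  # assumed path
PREFS_KEY = r"SOFTWARE\Topaz Labs LLC\Topaz Photo AI"  # assumed registry location

def set_autopilot_upscale(value: str):
    # Assumed value name -- use whatever setting Photo AI actually reads.
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, PREFS_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "autopilot_upscale_model", 0, winreg.REG_SZ, value)

def run_tpai(image: str, out_dir: str) -> dict:
    result = subprocess.run(
        [TPAI, image, "--output", out_dir],
        capture_output=True, text=True, check=True,
    )
    # The CLI reports the detected autopilot values as JSON;
    # the exact structure used below is an assumption.
    return json.loads(result.stdout)

def test_autopilot_recognizes_cgi(tmp_path):
    set_autopilot_upscale("auto")
    settings = run_tpai("testdata/cgi_recognized.png", str(tmp_path))
    assert settings["upscale"]["model"] == "CGI"  # assumed field names

def test_forced_cgi_model(tmp_path):
    set_autopilot_upscale("cgi")
    settings = run_tpai("testdata/cgi_not_recognized.png", str(tmp_path))
    assert settings["upscale"]["model"] == "CGI"
```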
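And for No. 3, a parametrized version over the face counts. The test image names are hypothetical; even the bare exit-code check would already have caught the 2.0.0 crash:

```python
import subprocess

import pytest

TPAI = r"C:\Program Files\Topaz Labs LLC\Topaz Photo AI\tpai.exe"  # assumed path

# One hypothetical test image per face count, including the crowd shots
# that crash Photo AI 2.0.0.
@pytest.mark.parametrize("faces", [1, 2, 3, 25, 50, 100])
def test_processes_images_with_faces(faces, tmp_path):
    result = subprocess.run(
        [TPAI, f"testdata/faces_{faces}.jpg", "--output", str(tmp_path)],
        capture_output=True, text=True,
    )
    # Bare minimum: the process must finish without crashing.
    assert result.returncode == 0, result.stderr
```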
…and so on for all core models and functionality, some parameters,…
And also different input formats (raw, non-raw, …)
If you follow this through and enhance your tests over time, you will ensure that every future release is a very good release, with the previous features still working. Of course, integration tests go alongside existing unit tests. But they have the charm of testing the entire image processing workflow, from loading to output, with everything in between.
There is also an added bonus. You can set up build servers on different machines (with different hardware, e.g. graphics cards or driver versions of graphics cards). Each integration test run can then automatically be run on all these machines simultaneously, giving you a unique insight into whether Photo AI functionality still works on different graphics cards and/or drivers.
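To make the runs from different machines comparable, each test run could record which GPU and driver it ran on; for NVIDIA cards, nvidia-smi supports exactly such a query (how you attach it to the test report is up to your CI):

```python
import subprocess

def gpu_info() -> str:
    # nvidia-smi really supports this query; it returns e.g.
    # "NVIDIA GeForce RTX 3080, 546.33"
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,driver_version", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Attach this string to every test report so a failure can be traced
# to a specific graphics card / driver combination.
print(gpu_info())
```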
Such an approach would have caught the Photo AI NVIDIA driver crash issue prior to release.