Direction of v3 development

I’m somewhat concerned about the progress and direction of development for the v3 series. It seems most of the alpha releases have been centred around stabilization. While I’m not opposed to having a stabilization function, it shouldn’t be the highest priority feature being worked on. I’d be happy if it were just a code stub for now so it can be added later.

@suraj had noted some weeks ago that we would be seeing regular updates to the alphas, possibly even weekly releases. Instead we are seeing very infrequent releases with almost no information about what is being worked on.

As a customer I’d prefer to see the development focus on building the core application with the features we asked for, and to see the v3 series do things better than the v2 series (performance and model/output quality).

Thanks

8 Likes

:sweat_smile:

VEAI Feature Request Ranking: (Top 6 out of 140 requests)

  1. Pause/Resume
  2. H265
  3. Face Enhancement
  4. Copy Audio
  5. Linux
  6. Stabilization :tada:


3 Likes

I stand corrected. However I would rather see the core functions of VEAI be improved before we see new features like stabilization introduced. And the rest of my post is still relevant.

4 Likes

I agree. Development, or at least its communication here, really comes off as unfocused to put it mildly.

I can’t say much about video stabilization, because I haven’t used it yet. But personally, I’d rather the Topaz team spend the time on development instead of releasing new versions more frequently. I mean, v3 is still in an alpha state; for most software you can’t even get your hands on an alpha version. And as far as I have seen since I started using VEAI, when there is a major issue a fix is released really fast — for a release version, of course. More testing of this alpha also means a more solid product, but it makes no sense to release a new version without many changes. My opinion :slight_smile:

3 Likes

Honestly, for me, VEAI v2 is still much better than v3 at the moment.

There seem to be a couple main complaints here: the alpha release schedule (and communication about it), and the desire to see certain features before others.

To be clear, they are under no obligation to provide us early access to these alpha builds. I understand the desire to get more frequent updates, but it’s really just looking a gift horse in the mouth. I’m thankful for the opportunity to kick the tires, regardless of how much time passes between each update.

Product development is messy. Estimating how long things will take to build is one of the hardest things about it. If we’re honest, as far as the alpha releases are concerned, they really don’t owe us more frequent updates, they don’t owe us communication, and they don’t owe us timelines. Alpha and beta releases need to be held to a different standard than typical releases. @suraj was kind enough to provide an optimistic release schedule — an estimate, not a promise — so it’s really uncool and frankly a bit astounding to see it being used against them.

Also, it’s important to remember that UI work and new capabilities (like stabilization) are completely different jobs, and are typically handled by different people. Work on stabilization should have next to no impact on the interface rewrite timeline. These are separate things, and we shouldn’t conflate the two.

Yes, we each have features we’d personally like to see come before others, but the team has to juggle business needs against user needs, in aggregate. I personally believe the new v3 UI is going to provide a better, more flexible platform for handling a wide range of improvements. They should be able to get more features built more quickly as a result, including the shiny features at the top of our own personal lists. (Remember that if there are features you want to see, they’ve provided a place for requesting them.)

I’m looking forward to seeing more issues from the v2 UI get fixed in v3, and in the meantime, I’m grateful to have access to early alpha builds, and appreciate having the ability to kick the tires on stabilization at the same time. It can be very frustrating to be on a product team and get complaints like these about early builds — so much so, that I’ve seen teams cancel their early access programs as a result. Let’s try to appreciate the early access, provide feedback on the changes they make, request features in the appropriate place, and try to avoid giving them a hard time for not giving us the communication or updates we personally want on the timeline we want them.

1 Like

While I agree with the gist of your comment, the fact is they created expectations and when people question it we aren’t ‘using it against them’ we are simply asking them to follow up on what they said. If they want to make the alpha closed then I’m fine with that, at least then I will know where things stand.

Anyway today they released an update!

Yeah, I think if they were to add anything new, it would be a model to reduce the blurring caused when a video’s FPS was set too low for the motion being captured when it was originally filmed, basically a “what if this was filmed at 60 FPS instead of 30 FPS?” option. (Chronos doesn’t solve this; it works well only if the motion was already slow enough not to cause a lot of motion blur, or for animated content.)

I haven’t been on the forum in the past month or more, so I didn’t get to respond to this post until now, in case it is still relevant. As far as the development rate is concerned, sometimes problems are discovered and deadlines slip. But I feel we have picked up the pace in the past few weeks, and hopefully we can maintain it.

There are multiple people working on different aspects of the application. There are different models in training, the stabilization one you guys know of, then there are others, including addressing motion blur. Model training takes a long time and is independent of the app development.

The GUI devs are working on the application GUI and in recent weeks have shipped the new GUI appearance changes. With multiple people working on similar areas there are bound to be issues, and that is why the app is still in alpha.

I feel barring major video player issues, most of the app is in stable condition and getting closer to beta/release levels.

As far as the feature list is concerned, it is just for understanding user priorities. The feasible requested features will be added once we have a stable foundation, and I believe we are getting there.

7 Likes

Hi @suraj, will there be new or updated models soon? Thanks

1 Like

There will be new models; all our existing models are trained to capacity.

6 Likes

Hi Suraj, I hope you add a deepfake-style model to enhance videos: one that gathers information about face shapes, clothes, floors, hair, skin, trees, etc. from the video itself, and is trained on the best examples of those faces and objects across the whole video to enhance them, instead of faking another person’s face over the original. In some low-quality videos we may have only a few good frames for the face, clothes, hair, skin, etc., which we could mark for the model to use as references. Someone on the Topaz FB group said there is a program on GitHub that enhances low-quality photos: you feed it all the available photos of a person, its AI learns from them, and it uses that data to enhance the photo.

1 Like

Will we have different models on the timeline, like different Proteus versions for certain frame intervals, and Proteus + Gaia, etc.? Also, is there a model in the works, or a newer Proteus version, that can detect banding?

What about the GPU utilization bottleneck?

3 Likes

@suraj, these ones

Since we moved to multiple processes, the GPU utilization should be high. Performance needs to be tuned and improved, that will be the focus once we get to later betas.

Currently, there is no way to use multiple models on the same video at different locations. Once we add scenes support, you should be able to do that, but it might be a 3.2 or later feature.

1 Like

You can, however, cut your video or split it into frames, run different models on different scenes, and rejoin them afterwards.

MKVToolNix will only cut at keyframes, so if it’s something you really care about and want done perfectly, it’s best to start by splitting your video into frames with ffmpeg or similar.
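For anyone who wants to try the frame-splitting route, here’s a minimal sketch using ffmpeg. The filenames, frame rate, and encoder settings are all assumptions you’d adapt to your own footage, and step 2 (running different VEAI models on different frame ranges) is a manual step outside the script:

```shell
# Hypothetical frame-split workflow (assumes ffmpeg is installed and a
# local input.mp4 exists; adjust names, frame rate, and codecs to taste).
# The guard makes this a harmless no-op on machines missing either one.
if command -v ffmpeg >/dev/null 2>&1 && [ -f input.mp4 ]; then
    mkdir -p frames enhanced

    # 1. Extract every frame losslessly as numbered PNGs; this is
    #    frame-accurate, unlike keyframe-only cuts.
    ffmpeg -i input.mp4 frames/%06d.png

    # 2. Run different VEAI models on different frame ranges, writing
    #    results into enhanced/ with the same numbering. (Manual step.)

    # 3. Reassemble at the source frame rate (30 fps assumed here) and
    #    copy the original audio track across untouched.
    ffmpeg -framerate 30 -i enhanced/%06d.png -i input.mp4 \
           -map 0:v -map 1:a -c:v libx264 -crf 18 -pix_fmt yuv420p \
           -c:a copy -shortest output.mp4
fi
```

Splitting to individual frames avoids the keyframe problem entirely, at the cost of a lot of disk space; check your source’s exact frame rate (e.g. 29.97 vs 30) before reassembling, or the rejoined video will drift out of sync with the audio.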