Generative Remove with Local Processing (November 2023)

Remove tool

We’re excited to release the new Remove tool into public beta. It lets you remove objects, distractions, and artifacts from your image while naturally filling in the surroundings.

This is the first generative removal tool that runs locally on your hardware, so you won’t be charged for usage or have your image transmitted to a remote server. With that in mind, we highly recommend at least 8GB of dedicated graphics memory (VRAM); otherwise the tool falls back to the CPU and runs very slowly.


For the highest quality, run Remove on smaller selections whose long side is under 2,000px. Larger selections can degrade the visual quality of the filled-in texture, so try breaking them up into smaller chunks.
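The tiling advice above can be sketched as a small helper. This is purely illustrative and not part of Photo AI or its CLI; it assumes an overlapping grid keeps each Remove pass under the 2,000px long-side limit:

```python
def tile_selection(width, height, max_side=2000, overlap=100):
    """Split a large selection into overlapping tiles whose sides stay
    at or under max_side, so each Remove pass works on a small region.
    All names here are illustrative, not part of the Photo AI API."""
    step = max_side - overlap
    tiles = []
    for top in range(0, height, step):
        for left in range(0, width, step):
            tiles.append((left, top,
                          min(left + max_side, width),
                          min(top + max_side, height)))
    return tiles

# A 4500x1200 selection splits into three overlapping tiles,
# each no wider or taller than 2000px.
print(tile_selection(4500, 1200))
```

The overlap between neighboring tiles helps the filled-in texture blend across tile seams.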


While the results are impressive in many cases, Remove will sometimes add an unintended replacement in place of your object. Re-applying the removal with different settings (usually a smaller margin) will often fix the problem. Let us know when you run into this so we can fix more cases in future releases.


We’re excited about the potential of the Remove tool, but as a public beta it’s still relatively slow, may create odd results, and is missing some features. We’re working on improving the tool by increasing speed, adding undo/redo, and improving integration with other filters. Please let us know what you think in the comments or the release thread.

Expanded Preferences

You can now heavily customize how you want Photo AI to behave, including:

  • Disable Autopilot to quickly start with no filters enabled
  • Select your preferred models or preferred strengths for different filters
  • Auto-close images after saving
  • Auto-resize to a certain scale, width, height, or longest edge
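For the longest-edge option, for example, both dimensions scale by the same factor so the aspect ratio is preserved. A minimal sketch of that computation; the function name is illustrative, not the app’s API:

```python
def resize_to_longest_edge(width, height, target):
    """Compute output dimensions when auto-resizing so that the
    longest edge equals target, preserving aspect ratio.
    Illustrative helper, not part of Photo AI."""
    scale = target / max(width, height)
    return round(width * scale), round(height * scale)

# A 6000x4000 image resized to a 3000px longest edge becomes 3000x2000.
print(resize_to_longest_edge(6000, 4000, 3000))
```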

We hope this allows you to mold Photo AI into something that works best for your workflow. Your Autopilot preferences will also be applied to process images when you use the CLI.

More precise before/after comparison

Previously, you might see a significant preview pixel shift when toggling between the original vs processed version of your image:


This pixel shift is now fixed, which will make comparing your results much easier. Note that you may still see a very minor pixel shift when using Raw Remove Noise or Sharpen Strong.

Improved raw file handling

Previewing and exporting raw files will now use the Adobe DNG SDK. This adds full-sized embedded previews to DNG files, improves preview consistency with export, and fixes many raw color issues. It also increases compatibility of exported DNGs with various other applications.

Other improvements

In addition to many smaller fixes, there have been a few more notable improvements since the v2 release:

  • Paste images directly into Photo AI from the clipboard (Ctrl/⌘ + V).
  • Use the Quick Export button to save images with previous settings.
  • Sharpen Standard v2 no longer has a brightness shift on Mac, and will be selected by default over v1.
  • Use the Image Capture button to more easily share before/after results.
  • Improved performance when importing or exporting large batches of images, switching view modes, and displaying thumbnails.
  • Fixed various issues related to the LRC and Photoshop plugins.


We have some exciting developments coming up for Photo AI over the next few months:

  • New Standard and High-Fidelity upscaling models that offer improved detail, fix blurry patches, and improve quality
  • New selection tool that makes it significantly easier to mask objects
  • Improved organization and workflow for the right-panel filters
  • Improved Raw Remove Noise default quality
  • Improved Autopilot consistency and decision-making
  • Improved batch processing stability and performance
  • Improvements to Adjust Lighting and Balance Color
  • Improvements to the Remove tool (see above)

Thanks for using Photo AI! We’re looking forward to hearing your feedback, particularly on how useful you find the Remove tool.



I’m also a bit familiar with the generative AI functions by Skylum, which are already rolled out at production level.
These already work quite well, but everything runs on some distant server.
Obviously there are loads of images and picture elements the AI is choosing from.

If you are going to achieve comparable results on my local machine, your AI could take my own photo library as a source to train the algorithm to find the best match for erasing or replacing.

I’m pretty sure that almost all Topaz users would also have extensive libraries to train the AI with, locally.

Keep up the good work!




When training on individual GPUs rather than a GPU server, we are talking about weeks, not minutes or hours.

Given that many people have already complained that the tool is slow because their hardware is slow, I don’t think local training needs to be considered at all.

Also, all images would have to be prepared for training, e.g. at 2048x2048px.
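For illustration only (my own sketch, not anything Topaz ships): preparing such a training patch usually starts with the largest centered square crop, which would then be resized to 2048x2048:

```python
def center_square_crop_box(width, height):
    """Return (left, top, right, bottom) of the largest centered square
    in an image; the crop would then be resized to 2048x2048 for
    training. Purely illustrative, not part of any Topaz tooling."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return left, top, left + side, top + side

# A 6000x4000 photo yields a 4000x4000 square centered horizontally.
print(center_square_crop_box(6000, 4000))
```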


It all sounds rather easy to the non-programmer but it’s the little details that matter. :smile:



If you do something like Topaz Labs does and want to raise it to a professional level, you need someone to select, evaluate, and prepare the training data.

Then you need someone to write the software for the training.
Then you need the right hardware.
Then you train and finally check whether the target condition is correct.
If the result doesn’t fit, the parameters have to be changed and the training repeated.
A.I. is learning by doing.

I also have to say that AIs should not be trained with AI-generated data, as the error rate then increases and the result gets worse and worse.

I don’t know what the situation is with ChatGPT from OpenAI, now that they are said to have taught it to calculate (in calculation there is only one correct result).

Sometimes, I need to remove objects using a few steps because the removal area exceeds 2000px. For instance, when people are standing near the subject, it becomes challenging to selectively remove just a part of the body. The AI tends to alter the face or other body parts instead of filling the area with the background content.

I would like to have the option to select the area for erasing and indicate which part of the image the AI generative fill should use as a reference (for example, by selecting a face and choosing nearby background content as the base - the AI should remove the face and replace it with elements similar to the chosen background).



I’m still trying to understand what happened last week.

This is the best video I have found so far.

To me it’s a kind of inspiration.


I have some wishes for future versions. Like many others I have had issues with blurry patches and artefacts, and because of that I have not always been able to use the application. So my wishes are the following:

  1. Fix basic functionality before adding new features. That is what I bought it for. Fix unsharp and noisy photos.

  2. Check that you are absolutely sure that no. 1 is tested and completed.

When the basic functionality is OK and the program is stable, the following features would be nice to have.

  1. Remove flare. Instead of using the clone stamp or similar, it would be nice to use a brush to magically remove flare and keep the content as is.

  2. Improve sharpening of out-of-focus and motion-blurred photos, and improve sharpening of photos from unsharp lenses.

  3. Recover blown-out highlights and shadows.


When I first used Remove it worked ok, now a few updates later all I get is this. In this case a microphone partially obscured his face, and ‘Remove’ has left this coloured blob. Have tried it with many photos and get same results.

A very quick approach:

Not perfect but still quite OK.

Try with different mask sizes and different settings for erase/keep area, quality and padding.


Thanks, I did post about this, though with a different photo, on the DPReview forum last week, and people there were all getting good results. So I’m thinking it could be a pc problem and hence trying this forum for a more expert view. I’ve tried different settings btw. If you have time, are you able to alter settings and reproduce the results I got?

I’d need the original for this.

Oh, and don’t upscale before removing things. Remove seems to work better on lower res images so it’s better to first do a remove pass and only then do the upscaling/sharpening/denoising afterwards in a second pass.

Sometimes it even yields better results if you deliberately shrink the image before using the remove tool and upscale it again after that.

Attached 42Mp original. Others have tried on the cropped photo though with good results, but at this stage I’m open for any guidance at all. Thanks for your time.

I have attached the original, uncropped photo; it’s 42Mp, but it seems to have uploaded OK.

I tested it, and it removed the microphone what I’d call well enough (still not perfect), even on the non-cropped HR photo.
This was with the default settings and erase area.

TPAI did, however, replace the microphone with another, distorted one once I set the "quality" slider to the maximum.

So, better leave that in the middle or a bit more on the "speed" side. The naming of this slider isn’t well chosen, as quality quite often yields worse results than speed…

Thanks, I’ve tried all settings without success. I’m wondering if the minimum system spec for this latest version has changed. I’m running an i7-2600K Sandy Bridge processor that has AVX but not AVX2. Many have reported that it works for them, which points to a problem on my end, but there are still no clues what it could be.

I think you can get a better result if you paint over the whole microphone, including the cable and the stand.

Still testing, but I already like it more than the Photoshop solution.
The goal would simply be an appealing result.


Selection of hair by single strokes.

Selection of a broad band of hair and head.


Removed. It’s so nice!

Needs some tuning, of course, because it expands the hair and thus changes the shape of the head, but it’s so useful.

I would have liked to have had something like this 10 years earlier.

Output, with just two Remove passes. The next step could be removing the tree on her head, but Remove isn’t able to.