Can Topaz Video AI "improve" iPhone-captured "spatial/stereoscopic" video in any way

Even though I have zero technical knowledge in this area, I have two general questions about iPhone 15 Pro-captured “spatial” video, and whether Topaz Video AI can do anything to “improve” that video. The questions are:

  1. whether Topaz could improve the “lens separation” of the iPhone by somehow creating, in software, a greater effective separation between the iPhone’s two capturing lenses (which are very close together). I suspect this is totally impossible, probably optically impossible, so no need to educate me; either don’t respond or simply say “impossible!”

  2. whether Topaz could “sharpen” the captured video in any way, so that it would look better when played on a lower-resolution output device (e.g. the Quest 3 instead of the Apple Vision Pro, since the Quest 3 has a lower display resolution than the AVP). Again, if this is a very dumb question, no need to respond.

BUT since the Quest 3 costs $500 and the AVP $3,500, it would be interesting if the perceived quality of viewing a video on the Quest 3 could be significantly improved by Topaz’s Video AI.

While Video AI is able to import and process 360° videos, our models are not primarily trained on equirectangular footage, and we cannot guarantee 100% compatibility with VR video inputs.

For the same reason, we also cannot guarantee that 3D SBS inputs will preserve their 3D alignment when processed. Several users have reported excellent results when using Video AI for 360° and 3D video, but we do not consider these videos an “officially” supported use case just yet. You will likely need to re-inject 360° metadata into the output file after processing; this tool will rewrite new 360° metadata onto the upscaled video.
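One workaround some users take with side-by-side (SBS) footage is to split the frame into separate left-eye and right-eye videos, process both with identical settings, and re-join them, so neither eye drifts relative to the other. The geometry involved is simple; here is a minimal pure-Python sketch on a toy frame (a real pipeline would crop and stack actual video, e.g. with ffmpeg filters, not pixel lists):

```python
# Toy illustration of SBS split/rejoin geometry.
# Each "frame" is a list of pixel rows; the left half of every row
# belongs to the left eye, the right half to the right eye.

def split_sbs(frame):
    """Split one SBS frame into (left_eye, right_eye) halves."""
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right

def join_sbs(left, right):
    """Re-join two identically processed halves into one SBS frame."""
    return [l + r for l, r in zip(left, right)]

# 2x4 dummy frame: 'L' pixels are the left view, 'R' the right view.
frame = [["L", "L", "R", "R"],
         ["L", "L", "R", "R"]]

left, right = split_sbs(frame)
rejoined = join_sbs(left, right)
assert rejoined == frame  # round trip preserves eye alignment
```

The point of the sketch is only that any processing applied between the split and the join must be identical for both halves, otherwise the stereo alignment the reply warns about is lost.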

The first part, I believe, is impossible at this time.