Welcome to the documentation page!
Here you can find some information about the demos.
Standard demo
The standard demo allows you to test all successfully exported models.
The graphical interface lets you choose:
- the model type (Refine, Base, Refine Custom, Base Custom);
- the backbone (MobileNetV2, ResNet-50, ResNet-101);
- the precision of the floating-point weights (16-bit, 32-bit);
- the input resolution (NHD, HD, FHD, 4K);
- the framework (TensorFlow Lite, TensorFlow.js, ONNX Runtime Web);
- the execution provider (WebAssembly, WebGL);
- the ONNX version simplified via onnx-simplifier;
- the native TensorFlow version, obtained by direct conversion.
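For illustration only, these options could be modeled as a single configuration object, as in the TypeScript sketch below; the type and field names are hypothetical and not taken from the demo's code.

```typescript
// Hypothetical model of the demo configuration; names are illustrative only.
type ModelType = "Refine" | "Base" | "Refine Custom" | "Base Custom";
type Backbone = "MobileNetV2" | "ResNet-50" | "ResNet-101";
type Precision = 16 | 32;                      // floating-point weight precision in bits
type Resolution = "NHD" | "HD" | "FHD" | "4K";
type Framework = "TensorFlow Lite" | "TensorFlow.js" | "ONNX Runtime Web";
type ExecutionProvider = "WebAssembly" | "WebGL";

interface DemoConfig {
  modelType: ModelType;
  backbone: Backbone;
  precision: Precision;
  resolution: Resolution;
  framework: Framework;
  provider: ExecutionProvider;
  simplifiedOnnx: boolean;    // ONNX graph simplified via onnx-simplifier
  nativeTensorFlow: boolean;  // graph obtained by direct conversion from TensorFlow
}
```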
When a combination of these is chosen, some options are automatically excluded.
Hovering over an excluded option with the pointer brings up a small text box explaining why.
The following conditions apply (a sketch of how such checks might look follows the list):
- The simplified ONNX option is not available when the TensorFlow or TensorFlow Lite framework is used, due to the negative padding it introduces.
- No 16-bit ONNX graphs have been implemented.
- Refinement networks are not compatible with indirect conversion from PyTorch to TensorFlow due to their explicit use of the padding operator. Both are instead executable in TensorFlow.js in the native version.
- GPU execution is not possible in ONNX Runtime Web due to the large number of kernels not yet supported.
- Native TensorFlow graphs are not converted to ONNX Runtime Web. Given the broader WebGL support offered by TensorFlow.js, there is indeed no valid reason to do so.
- Inference in WebAssembly with TensorFlow.js is impossible for all models exported from PyTorch. The successive translations of the model introduce the SparseToDense kernel, which is currently not implemented.
- While WebGL-accelerated delegates exist for TensorFlow Lite on iOS and Android, the same is not true in browsers today. In that case, the only available execution provider is WebAssembly.
- Indirect export from PyTorch to TensorFlow Lite produces graphs that cannot be executed. The interpreter is in fact enabled only for networks converted directly from TensorFlow, with the exception of Refine. Since the latter is exported in the state-of-the-art sampling mode, inference is impossible due to the native FlexCropAndResize and FlexTranspose kernels.
- Since the Custom models are the result of the trainings carried out, the only selectable backbone is MobileNetV2.
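As a rough illustration of how such exclusions might be encoded, the sketch below checks a few of the rules above against the hypothetical DemoConfig type from the previous example; it is not the demo's actual logic.

```typescript
// Illustrative check of a few exclusion rules; not the demo's real implementation.
function whyExcluded(cfg: DemoConfig): string | null {
  if (cfg.simplifiedOnnx && cfg.framework !== "ONNX Runtime Web") {
    return "Simplified ONNX graphs are unavailable with TensorFlow frameworks (negative padding).";
  }
  if (cfg.framework === "ONNX Runtime Web" && cfg.provider === "WebGL") {
    return "GPU execution is not possible in ONNX Runtime Web (too many unsupported kernels).";
  }
  if (cfg.framework === "TensorFlow Lite" && cfg.provider === "WebGL") {
    return "No WebGL delegate for TensorFlow Lite in the browser; only WebAssembly is available.";
  }
  if (cfg.modelType.endsWith("Custom") && cfg.backbone !== "MobileNetV2") {
    return "Custom models were trained only with the MobileNetV2 backbone.";
  }
  return null; // the combination passes these (partial) checks
}
```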
The application allows the user to upload images from their device and
apply Background Matting to them via a button at the bottom. The new background
on which to compose the result can be a uniform color or a chosen file, and it is possible
to set its level of transparency. This can also be done after inference, in order to
visually evaluate the result as the background changes. For each resolution, ready-made inputs
are provided, in case you don't have your own available.
Finally, all the visible figures can be downloaded by clicking the appropriate icon at the top right.
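The compositing itself follows the standard matting equation I = αF + (1 − α)B. A minimal sketch of how the result might be assembled for a 2D canvas is shown below; the function and parameter names are hypothetical, and the demo's real pipeline (including the transparency control) may be implemented differently.

```typescript
// Minimal compositing sketch: blend the predicted foreground over the new
// background using the alpha matte. Names are illustrative placeholders.
function compose(
  foreground: ImageData,  // RGB of the subject predicted by the network
  alpha: Float32Array,    // per-pixel alpha matte in [0, 1], one value per pixel
  background: ImageData   // uniform color or user-chosen image, same size
): ImageData {
  const out = new ImageData(foreground.width, foreground.height);
  for (let i = 0, p = 0; i < alpha.length; i++, p += 4) {
    const a = alpha[i];
    for (let c = 0; c < 3; c++) {
      // I = alpha * F + (1 - alpha) * B for each color channel
      out.data[p + c] = a * foreground.data[p + c] + (1 - a) * background.data[p + c];
    }
    out.data[p + 3] = 255; // fully opaque output pixel
  }
  return out;
}
```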
Live demo
Unlike the standard demo, the live variant offers significantly fewer options.
The configurations are in fact limited to the Custom networks executable in TensorFlow.js
with WebGL at NHD and HD input resolutions, while the customizations of the
new background remain unchanged. Taking into account the performance analysis, which we will discuss later,
there is no point in offering the user additional settings.
After enabling access to the device's camera from the home screen, it is necessary to capture
a background in order to apply the Matting.
This can be done in two ways: ONE-SHOT and LIVE.
From a functional point of view, the first is identical
to what happens in the standard demo, while from an implementation point of view, the processing
and visualization of the result take place on the GPU. This is of paramount importance for the second mode,
where the images coming from the camera are continually used as input to successive inferences.
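A rough sketch of such a continuous loop with TensorFlow.js on the WebGL backend is given below; the model input names, the pre-processing, and the helper variables are assumptions, not the demo's actual code.

```typescript
import * as tf from "@tensorflow/tfjs";

// Illustrative live loop: grab a camera frame, run the matting model, show the result.
// `model`, `capturedBackground`, and the input names are hypothetical placeholders.
async function liveLoop(
  video: HTMLVideoElement,         // <video> element bound to the camera stream
  model: tf.GraphModel,            // matting network already loaded
  capturedBackground: tf.Tensor3D, // background captured in the ONE-SHOT or LIVE step
  canvas: HTMLCanvasElement        // where the composited result is drawn
) {
  await tf.setBackend("webgl");    // execution provider used by the live demo

  const step = async () => {
    const output = tf.tidy(() => {
      const frame = tf.browser.fromPixels(video).toFloat().div(255);
      const bgr = capturedBackground.toFloat().div(255);
      // The real model defines its own input signature; this call is only a sketch.
      return model.execute({
        src: frame.expandDims(0),
        bgr: bgr.expandDims(0),
      }) as tf.Tensor;
    });
    const composite = output.squeeze() as tf.Tensor3D;
    await tf.browser.toPixels(composite, canvas); // draw the composited frame
    tf.dispose([output, composite]);
    requestAnimationFrame(step);   // feed the next camera frame into a new inference
  };
  requestAnimationFrame(step);
}
```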