Merge pull request #24 from glucauze/v1.2.1

v1.2.1 experimental gpu option
Tran Xen 2 years ago committed by GitHub
commit 55b845c666

@ -1,3 +1,7 @@
# 1.2.1 :
Add GPU support option : see https://github.com/glucauze/sd-webui-faceswaplab/pull/24
# 1.2.0 :
This version changes quite a few things.
@ -18,10 +22,14 @@ Bug fixes :
In terms of the API, it is now possible to create a remote checkpoint and use it in units. See the example in client_api or the tests in the tests directory.
See https://github.com/glucauze/sd-webui-faceswaplab/pull/19
# 1.1.2 :
+ Switch face checkpoint format from pkl to safetensors
See https://github.com/glucauze/sd-webui-faceswaplab/pull/4
## 1.1.1 :
+ Add settings for default inpainting prompts

@ -1,5 +1,5 @@
numpy==1.25.1
Pillow==10.0.0
pydantic==1.10.9
Requests==2.31.0
safetensors==0.3.1
numpy
Pillow
pydantic
Requests
safetensors>=0.3.1

@ -16,6 +16,7 @@ gem "github-pages", "~> 228", group: :jekyll_plugins
group :jekyll_plugins do
gem "webrick"
gem 'jekyll-toc'
end
# Windows and JRuby does not include zoneinfo files, so bundle the tzinfo-data gem

@ -190,6 +190,9 @@ GEM
jekyll-seo-tag (~> 2.0)
jekyll-titles-from-headings (0.5.3)
jekyll (>= 3.3, < 5.0)
jekyll-toc (0.18.0)
jekyll (>= 3.9)
nokogiri (~> 1.12)
jekyll-watch (2.2.1)
listen (~> 3.0)
jemoji (0.12.0)
@ -256,6 +259,7 @@ DEPENDENCIES
github-pages (~> 228)
http_parser.rb (~> 0.6.0)
jekyll (~> 3.9.3)
jekyll-toc
minima (~> 2.5.1)
tzinfo (>= 1, < 3)
tzinfo-data

@ -37,6 +37,9 @@ author:
minima:
skin: dark
plugins:
- jekyll-toc
# Exclude from processing.
# The following items will not be processed, by default.
# Any item listed under the `exclude:` key here will be automatically added to

@ -0,0 +1,14 @@
---
layout: default
---
<article class="post">
<header class="post-header">
<h1 class="post-title">{{ page.title | escape }}</h1>
</header>
<div class="post-content">
{{ content | toc }}
</div>
</article>

@ -2,9 +2,22 @@
layout: page
title: Documentation
permalink: /doc/
toc: true
---
# Main Interface
## TLDR: I Just Want Good Results:
1. Put a face in the reference.
2. Select a face number.
3. Select "Enable."
4. Select "CodeFormer" in global Post-Processing.
Once you're happy with some results but want to improve, the next steps are to:
+ Use advanced settings in face units (which are not as complex as they might seem; it's basically fine-tuning post-processing for each face).
+ Use pre/post inpainting to tweak the image a bit for more natural results.
## Main Interface
Here is the interface for FaceSwap Lab. It is available in the form of an accordion in both img2img and txt2img.
@ -12,7 +25,7 @@ You can configure several units, each allowing you to replace a face. Here, 3 un
![](/assets/images/doc_mi.png)
#### Face Unit
### Face Unit
The first thing to do is to activate the unit with **'enable'** if you want to use it.
@ -25,7 +38,7 @@ Here are the main options for configuring a unit:
**You must always have at least one reference face OR a checkpoint. If both are selected, the checkpoint will be used and the reference ignored.**
#### Similarity
### Similarity
Always check for errors in the SD console. In particular, the absence of a reference face or a checkpoint can trigger errors.
@ -37,7 +50,7 @@ Always check for errors in the SD console. In particular, the absence of a refer
+ **Same gender:** the gender of the source face will be determined and only faces of the same gender will be considered.
+ **Sort by size:** faces will be sorted from largest to smallest.
#### Pre-Inpainting :
### Pre-Inpainting
This part is applied BEFORE face swapping and only on matching faces.
@ -47,7 +60,7 @@ You can use a specific model for the replacement, different from the model used
For inpainting to be active, denoising must be greater than 0 and the Inpainting When option must be set to:
#### Post-Processing & Advanced Masks Options : (upscaled inswapper)
### Post-Processing & Advanced Masks Options : (upscaled inswapper)
By default, these settings are disabled, but you can use the global settings to modify the default behavior. These options are called "Default Upscaled swapper..."
@ -59,13 +72,13 @@ The purpose of this feature is to enhance the quality of the face in the final i
The upscaled inswapper is disabled by default. It can be enabled in the sd options. Understanding the various steps helps explain why results may be unsatisfactory and how to address this issue.
+ **upscaler** : LDSR if None. The LDSR option generally gives the best results but at the expense of a lot of computational time. You should test other models to form an opinion. The 003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR-L_x4_GAN model seems to give good results in a reasonable amount of time. It's not possible to disable upscaling, but it is possible to choose LANCZOS for speed if Codeformer is enabled in the upscaled inswapper. The result is generally satisfactory.
+ **upscaler** : LDSR if None. The LDSR option generally gives the best results but at the expense of a lot of computational time. You should test other models to form an opinion. The [003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR-L_x4_GAN](https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR-L_x4_GAN.pth) model seems to give good results in a reasonable amount of time. It's not possible to disable upscaling, but it is possible to choose LANCZOS for speed if Codeformer is enabled in the upscaled inswapper. The result is generally satisfactory. You can check [here for an upscaler database](https://upscale.wiki/wiki/Model_Database) and [here for some comparison](https://phhofm.github.io/upscale/favorites.html). It is a test and try process.
+ **restorer** : The face restorer to be used if necessary. Codeformer generally gives good results.
+ **sharpening** can provide more natural results, but it may also add artifacts. The same goes for **color correction**. By default, these options are set to False.
+ **improved mask:** The segmentation mask for the upscaled swapper is designed to avoid the square mask and prevent degradation of the non-face parts of the image. It is based on the Codeformer implementation. If "Use improved segmented mask (use pastenet to mask only the face)" and "upscaled inswapper" are checked in the settings, the mask will only cover the face, and will not be squared. However, depending on the image, this might introduce different types of problems such as artifacts on the border of the face.
+ **erosion factor:** it is possible to adjust the mask erosion parameters using the erosion settings. The higher this setting is, the more the mask is reduced.
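To give a rough feel for the erosion factor, here is a minimal sketch (not the extension's code; the mask shape, kernel size, and factor mapping are assumptions) showing how eroding a binary face mask shrinks the area that gets pasted back:

```python
import cv2
import numpy as np

# Hypothetical binary face mask (255 = area that will be pasted back)
mask = np.zeros((512, 512), dtype=np.uint8)
cv2.circle(mask, (256, 256), 180, 255, -1)

erosion_factor = 3.0  # higher factor -> smaller mask (assumed mapping)
kernel_size = max(1, int(5 * erosion_factor))
kernel = np.ones((kernel_size, kernel_size), np.uint8)
eroded_mask = cv2.erode(mask, kernel, iterations=1)
```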
#### Post-Inpainting :
### Post-Inpainting
This part is applied AFTER face swapping and only on matching faces.
@ -122,7 +135,7 @@ The checkpoint can then be used in the main interface (use refresh button)
## Processing order:
## Processing order
The extension is activated after all other extensions have been processed. During the execution, several steps take place.
@ -157,10 +170,56 @@ The API is documented in the FaceSwapLab tags in the http://localhost:7860/docs
You don't have to use the api_utils.py file and pydantic types, but it can save time.
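For example, a minimal sketch (assuming the webui runs with the API enabled on the default local address) that lists the FaceSwapLab routes from the standard FastAPI schema, without using api_utils.py:

```python
import requests

# Fetch the webui OpenAPI schema and print the FaceSwapLab endpoints
schema = requests.get("http://localhost:7860/openapi.json").json()
for path, methods in schema["paths"].items():
    if "faceswaplab" in path:
        print(path, sorted(methods.keys()))
```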
## Experimental GPU support
You need a sufficiently recent version of your SD environment. Using the GPU comes with a number of small drawbacks to be aware of, but the performance gain is substantial.
In version 1.2.1, the ability to use the GPU has been added as a setting that can be configured at SD startup. Currently, this feature is only supported on Windows and Linux, as the necessary dependencies for Mac have not been included.
The `--faceswaplab_gpu` option in SD can be added to the args in webui-user.sh or webui-user.bat. **There is also an option in SD settings**.
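For example, assuming the standard webui-user files, the flag can be appended to the existing command-line arguments (a sketch, adapt to your own args):

```shell
# webui-user.bat (Windows)
set COMMANDLINE_ARGS=--faceswaplab_gpu

# webui-user.sh (Linux)
export COMMANDLINE_ARGS="--faceswaplab_gpu"
```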
The model stays loaded in VRAM and won't be unloaded after each use. As of now, I don't know a straightforward way to handle this, so it will occupy space continuously. If your system's VRAM is limited, enabling this option might not be advisable.
A change has also been made that could lead to some ripple effects. Previously, detection parameters such as det_size and det_thresh were automatically adjusted when a second model was loaded. This is no longer possible, so these parameters have been moved to the global settings to enable face detection.
The `auto_det_size` option emulates the old behavior. It makes no difference on CPU, BUT it will load the model twice if you use the GPU. That means more VRAM consumption and twice the initial load time. If you don't want that, you can use a det_size of 320 (see below).
If you have enabled the GPU, are sure you have a CUDA-compatible card, and the model keeps using the CPU provider, please check that you have onnxruntime-gpu installed.
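A quick way to check which providers onnxruntime exposes (the CUDA provider should appear when onnxruntime-gpu is correctly installed):

```python
import onnxruntime

# Expect "CUDAExecutionProvider" in the list if onnxruntime-gpu is installed correctly
print(onnxruntime.get_available_providers())
```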
### SD.NEXT and GPU
Please read carefully.
Using the GPU requires the onnxruntime-gpu>=1.15.0 dependency. For the moment, this conflicts with older SD.Next dependencies (tensorflow, which uses numpy, and potentially rembg). You will need to check that numpy>=1.24.2 and tensorflow>=2.13.0 are installed.
You should therefore be able to debug a little before activating the option. If you don't feel up to it, it's best not to use it.
The first time the swap is used, the program will continue to use the CPU but will offer to install the GPU dependencies. You will then need to restart. This is due to the optimizations SD.Next makes to the installation scripts.
For SD.Next, it is best to install the dependencies manually:
On Windows:
```shell
.\venv\Scripts\activate
cd .\extensions\sd-webui-faceswaplab\
pip install -r .\requirements-gpu.txt
```
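On Linux, the equivalent would presumably be (a sketch, assuming the default venv layout):

```shell
source venv/bin/activate
cd extensions/sd-webui-faceswaplab
pip install -r requirements-gpu.txt
```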
## Settings
You can change the program's default behavior in your webui's global settings (FaceSwapLab section in settings). This is particularly useful if you want to have default options for inpainting or for post-processing, for example.
The interface must be restarted to take the changes into account. Sometimes you have to reboot the entire webui server.
There may be display bugs on some radio buttons that may not display the value (Codeformer might look disabled for instance). Check the logs to ensure that the transformation has been applied.
### det_size and det_thresh (detection accuracy and performances)
V1.2.1 : A change has been made that could lead to some ripple effects. Previously, detection parameters such as det_size and det_thresh were automatically adjusted when a second model was loaded. This is no longer possible, so these parameters have been moved to the global settings to enable face detection.
The `auto_det_size` option emulates the old behavior. It makes no difference on CPU, BUT it will load the model twice if you use the GPU. That means more VRAM consumption and twice the initial load time. If you don't want that, you can use a det_size of 320 (see below).
The `det_size` parameter defines the size of the detection area, controlling the spatial resolution at which faces are detected within an image. A larger detection size might capture more facial details, enhancing accuracy but potentially impacting processing speed. Conversely, the `det_thresh` parameter represents the detection threshold, serving as a sensitivity control for face detection. A higher threshold value leads to more conservative detection, capturing only the most prominent faces, while a lower threshold might detect more faces but could also result in more false positives.
It has been observed that a det_size value of 320 is more effective at detecting large faces. If there are issues with detecting large faces, switching to this value is recommended, though it might result in a loss of some quality.
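These settings map directly onto insightface's analysis model. As a standalone illustration (not the extension's code; the model pack name and image path are assumptions), this is how det_size and det_thresh are applied:

```python
import cv2
from insightface.app import FaceAnalysis

img = cv2.imread("portrait.jpg")  # hypothetical test image
app = FaceAnalysis(name="buffalo_l")  # assumed detection/analysis pack
# Lower det_size (e.g. 320) tends to catch very large faces; higher det_thresh is more conservative
app.prepare(ctx_id=0, det_size=(320, 320), det_thresh=0.5)
faces = app.get(img)
print(f"{len(faces)} face(s) detected")
```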

@ -2,6 +2,7 @@
layout: page
title: FAQ
permalink: /faq/
toc: true
---
Our issue tracker often contains requests that may originate from a misunderstanding of the software's functionality. We aim to address these queries; however, due to time constraints, we may not be able to respond to each request individually. This FAQ section serves as a preliminary source of information for commonly raised concerns. We recommend reviewing these before submitting an issue.
@ -71,6 +72,16 @@ The quality of results is inherently tied to the capabilities of the model and c
Consider this extension as a low-cost alternative to more sophisticated tools like Lora, or as an addition to such tools. It's important to **maintain realistic expectations of the results** provided by this extension.
#### Why is a face not detected?
Face detection might be influenced by various factors and settings, particularly the det_size and det_thresh parameters. Here's how these could affect detection:
+ Detection Size (det_size): If the detection size is set too large, very large faces may not be captured adequately. A value of 320 has been found to be more effective for detecting large faces, though it might result in a loss of some quality.
+ Detection Threshold (det_thresh): If the threshold is set too high, it can make the detection more conservative, capturing only the most prominent faces. A lower threshold might detect more faces but could also result in more false positives.
If a face is not being detected, adjusting these parameters might solve the issue. Try lowering the det_size to 320 if large faces are the problem, or experiment with different det_thresh values to find the balance that works best for your specific case.
#### Issue: Incorrect Gender Detection
@ -78,11 +89,7 @@ The gender detection functionality is handled by the underlying analysis model.
#### Why isn't GPU support included?
While implementing GPU support may seem straightforward, simply requiring a modification to the onnxruntime implementation and a change in providers in the swapper, there are reasons we haven't included it as a standard option.
The primary consideration is the substantial VRAM usage of the SD models. Integrating the model on the GPU doesn't result in significant performance gains with the current state of the software. Moreover, the GPU support becomes truly beneficial when processing large numbers of frames or video. However, our experience indicates that this tends to cause more issues than it resolves.
Consequently, requests for GPU support as a standard feature will not be considered.
GPU is supported via an option; see the [documentation](../doc/). This is experimental, use it carefully.
#### What is the 'Upscaled Inswapper' Option in SD FaceSwapLab?

@ -8,6 +8,8 @@ permalink: /install/
The extension runs mainly on the CPU to avoid the use of VRAM. However, it is recommended to follow the specifications recommended by sd/a1111 with regard to prerequisites. At the time of writing, a Python version lower than 3.11 is preferable (even if it works with Python 3.11, model loading and performance may fall short of expectations).
Older versions of gradio don't work well with the extension. See this bug report: https://github.com/glucauze/sd-webui-faceswaplab/issues/5. It has been tested with gradio 3.32.0.
### Windows-User : Visual Studio ! Don't neglect this !
Before beginning the installation process, if you are using Windows, you need to install this requirement:
@ -18,6 +20,12 @@ Before beginning the installation process, if you are using Windows, you need to
3. OR if you don't want to install either the full Visual Studio suite or the VS C++ Build Tools: Follow the instructions provided in section VIII of the documentation.
## SD.Next / Vladmantic
SD.Next loading optimizations in relation to extension installation scripts can sometimes cause problems. This is particularly the case if you copy the script without installing it via the interface.
If you get an error after startup, try restarting the server.
## Manual Install
To install the extension, follow the steps below:

@ -1,36 +1,62 @@
import launch
import os
import sys
import pkg_resources
from modules import shared
from packaging.version import parse


def check_install() -> None:
    # Pick the GPU requirements file only if the GPU option is enabled and we are not on macOS
    use_gpu = getattr(
        shared.cmd_opts, "faceswaplab_gpu", False
    ) or shared.opts.data.get("faceswaplab_use_gpu", False)

    if use_gpu and sys.platform != "darwin":
        req_file = os.path.join(
            os.path.dirname(os.path.realpath(__file__)), "requirements-gpu.txt"
        )
    else:
        req_file = os.path.join(
            os.path.dirname(os.path.realpath(__file__)), "requirements.txt"
        )

    def is_installed(package: str) -> bool:
        package_name = package.split("==")[0].split(">=")[0].strip()
        try:
            installed_version = parse(
                pkg_resources.get_distribution(package_name).version
            )
        except pkg_resources.DistributionNotFound:
            return False

        if "==" in package:
            required_version = parse(package.split("==")[1])
            return installed_version == required_version
        elif ">=" in package:
            required_version = parse(package.split(">=")[1])
            return installed_version >= required_version
        else:
            return True

    print("Checking faceswaplab requirements")
    with open(req_file) as file:
        for package in file:
            try:
                package = package.strip()

                if not is_installed(package):
                    print(f"Install {package}")
                    launch.run_pip(
                        f"install {package}",
                        f"sd-webui-faceswaplab requirement: {package}",
                    )

            except Exception as e:
                print(e)
                print(
                    f"Warning: Failed to install {package}, faceswaplab will not work."
                )
                raise e


check_install()

@ -8,3 +8,8 @@ def preload(parser: ArgumentParser) -> None:
choices=["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"],
help="Set the log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)",
)
parser.add_argument(
"--faceswaplab_gpu",
action="store_true",
help="Enable GPU if set, disable if not set",
)

@ -0,0 +1,11 @@
cython
dill
ifnude
insightface==0.7.3
onnx>=1.14.0
opencv-python
pandas
pydantic
safetensors
onnxruntime>=1.15.0
onnxruntime-gpu>=1.15.0

@ -2,8 +2,8 @@ cython
dill
ifnude
insightface==0.7.3
onnx==1.14.0
onnxruntime==1.15.1
onnx>=1.14.0
onnxruntime>=1.15.0
opencv-python
pandas
pydantic

@ -1,24 +1,57 @@
import os
from tqdm import tqdm
import traceback
import urllib.request
from scripts.faceswaplab_utils.faceswaplab_logging import logger
from scripts.faceswaplab_swapping.swapper import is_sha1_matching
from scripts.faceswaplab_utils.models_utils import get_models
from scripts.faceswaplab_globals import *
from packaging import version
import pkg_resources
import hashlib
ALREADY_DONE = False
def check_install() -> None:
# Very ugly hack :( due to sdnext optimization not calling install.py every time if git log has not changed
import importlib.util
import sys
import os
current_dir = os.path.dirname(os.path.realpath(__file__))
check_install_path = os.path.join(current_dir, "..", "install.py")
spec = importlib.util.spec_from_file_location("check_install", check_install_path)
check_install = importlib.util.module_from_spec(spec)
sys.modules["check_install"] = check_install
spec.loader.exec_module(check_install)
check_install.check_install() # type: ignore
#### End of ugly hack :( !
def is_sha1_matching(file_path: str, expected_sha1: str) -> bool:
sha1_hash = hashlib.sha1(usedforsecurity=False)
try:
with open(file_path, "rb") as file:
for byte_block in iter(lambda: file.read(4096), b""):
sha1_hash.update(byte_block)
if sha1_hash.hexdigest() == expected_sha1:
return True
else:
return False
except Exception as e:
logger.error(
"Failed to check model hash, check the model is valid or has been downloaded adequately : %e",
e,
)
traceback.print_exc()
return False
def check_configuration() -> None:
global ALREADY_DONE
if ALREADY_DONE:
return
logger.info(f"FaceSwapLab {VERSION_FLAG} Config :")
# This has been moved here due to pb with sdnext in install.py not doing what a1111 is doing.
models_dir = MODELS_DIR
faces_dir = FACES_DIR
@ -48,6 +81,9 @@ def check_configuration() -> None:
os.makedirs(models_dir, exist_ok=True)
os.makedirs(faces_dir, exist_ok=True)
if not os.path.exists(model_path):
download(model_url, model_path)
if not is_sha1_matching(model_path, EXPECTED_INSWAPPER_SHA1):
logger.error(
"Suspicious sha1 for model %s, check the model is valid or has been downloaded adequately. Should be %s",
@ -63,17 +99,4 @@ def check_configuration() -> None:
gradio_version,
)
if not os.path.exists(model_path):
download(model_url, model_path)
def print_infos() -> None:
logger.info("FaceSwapLab config :")
logger.info("+ MODEL DIR : %s", models_dir)
models = get_models()
logger.info("+ MODELS: %s", models)
logger.info("+ FACES DIR : %s", faces_dir)
logger.info("+ ANALYZER DIR : %s", ANALYZER_DIR)
print_infos()
ALREADY_DONE = True

@ -1,8 +1,11 @@
from scripts.configure import check_configuration
check_configuration()
import importlib
import traceback
from scripts import faceswaplab_globals
from scripts.configure import check_configuration
from scripts.faceswaplab_api import faceswaplab_api
from scripts.faceswaplab_postprocessing import upscaling
from scripts.faceswaplab_settings import faceswaplab_settings
@ -12,18 +15,22 @@ from scripts.faceswaplab_utils import faceswaplab_logging, imgutils, models_util
from scripts.faceswaplab_utils.models_utils import get_current_model
from scripts.faceswaplab_utils.typing import *
from scripts.faceswaplab_utils.ui_utils import dataclasses_from_flat_list
from scripts.faceswaplab_utils.faceswaplab_logging import logger, save_img_debug
# Reload all the modules when using "apply and restart"
# This is mainly done for development purposes
importlib.reload(swapper)
importlib.reload(faceswaplab_logging)
importlib.reload(faceswaplab_globals)
importlib.reload(imgutils)
importlib.reload(upscaling)
importlib.reload(faceswaplab_settings)
importlib.reload(models_utils)
importlib.reload(faceswaplab_unit_ui)
importlib.reload(faceswaplab_api)
import logging
if logger.getEffectiveLevel() <= logging.DEBUG:
importlib.reload(swapper)
importlib.reload(faceswaplab_logging)
importlib.reload(faceswaplab_globals)
importlib.reload(imgutils)
importlib.reload(upscaling)
importlib.reload(faceswaplab_settings)
importlib.reload(models_utils)
importlib.reload(faceswaplab_unit_ui)
importlib.reload(faceswaplab_api)
import os
from pprint import pformat
@ -46,7 +53,6 @@ from scripts.faceswaplab_postprocessing.postprocessing_options import (
PostProcessingOptions,
)
from scripts.faceswaplab_ui.faceswaplab_unit_settings import FaceSwapUnitSettings
from scripts.faceswaplab_utils.faceswaplab_logging import logger, save_img_debug
EXTENSION_PATH = os.path.join("extensions", "sd-webui-faceswaplab")
@ -67,7 +73,6 @@ except:
class FaceSwapScript(scripts.Script):
def __init__(self) -> None:
super().__init__()
check_configuration()
@property
def units_count(self) -> int:

@ -10,7 +10,7 @@ REFERENCE_PATH = os.path.join(
scripts.basedir(), "extensions", "sd-webui-faceswaplab", "references"
)
VERSION_FLAG: str = "v1.2.0"
VERSION_FLAG: str = "v1.2.1"
EXTENSION_PATH = os.path.join("extensions", "sd-webui-faceswaplab")
# The NSFW score threshold. If any part of the image has a score greater than this threshold, the image will be considered NSFW.

@ -16,6 +16,16 @@ def on_ui_settings() -> None:
section=section,
),
)
shared.opts.add_option(
"faceswaplab_use_gpu",
shared.OptionInfo(
False,
"Use GPU, only for CUDA on Windows/Linux - experimental and risky, can messed up dependencies (requires restart)",
gr.Checkbox,
{"interactive": True},
section=section,
),
)
shared.opts.add_option(
"faceswaplab_keep_original",
shared.OptionInfo(
@ -37,11 +47,33 @@ def on_ui_settings() -> None:
),
)
shared.opts.add_option(
"faceswaplab_det_size",
shared.OptionInfo(
640,
"det_size : Size of the detection area for face analysis. Higher values may improve quality but reduce speed. Low value may improve detection of very large face.",
gr.Slider,
{"minimum": 320, "maximum": 640, "step": 320},
section=section,
),
)
shared.opts.add_option(
"faceswaplab_auto_det_size",
shared.OptionInfo(
True,
"Auto det_size : Will load model twice and test faces on each if needed (old behaviour). Takes more VRAM. Precedence over fixed det_size",
gr.Checkbox,
{"interactive": True},
section=section,
),
)
shared.opts.add_option(
"faceswaplab_detection_threshold",
shared.OptionInfo(
0.5,
"Face Detection threshold",
"det_thresh : Face Detection threshold",
gr.Slider,
{"minimum": 0.1, "maximum": 0.99, "step": 0.001},
section=section,

@ -3,13 +3,12 @@ import os
from dataclasses import dataclass
from pprint import pformat
import traceback
from typing import Any, Dict, Generator, List, Set, Tuple, Optional
from typing import Any, Dict, Generator, List, Set, Tuple, Optional, Union
import tempfile
from tqdm import tqdm
import sys
from io import StringIO
from contextlib import contextmanager
import hashlib
import cv2
import insightface
@ -37,8 +36,52 @@ from scripts.faceswaplab_postprocessing.postprocessing_options import (
from scripts.faceswaplab_utils.models_utils import get_current_model
from scripts.faceswaplab_utils.typing import CV2ImgU8, PILImage, Face
from scripts.faceswaplab_inpainting.i2i_pp import img2img_diffusion
from modules import shared
import onnxruntime
providers = ["CPUExecutionProvider"]
def use_gpu() -> bool:
return (
getattr(shared.cmd_opts, "faceswaplab_gpu", False)
or opts.data.get("faceswaplab_use_gpu", False)
) and sys.platform != "darwin"
@lru_cache
def force_install_gpu_providers() -> None:
# Ugly Ugly hack due to SDNEXT :
try:
from scripts.configure import check_install
logger.warning("Try to reinstall gpu dependencies")
check_install()
logger.warning("IF onnxruntime-gpu has been installed successfully, RESTART")
logger.warning(
"On SD.NEXT/vladmantic you will also need to check numpy>=1.24.2 and tensorflow>=2.13.0"
)
except:
logger.error(
"Reinstall has failed (which is normal on windows), please install requirements-gpu.txt manually to enable gpu."
)
def get_providers() -> List[str]:
providers = ["CPUExecutionProvider"]
if use_gpu():
if "CUDAExecutionProvider" in onnxruntime.get_available_providers():
providers = ["CUDAExecutionProvider"]
else:
logger.error(
"CUDAExecutionProvider not found in onnxruntime.available_providers : %s, use CPU instead. Check onnxruntime-gpu is installed.",
onnxruntime.get_available_providers(),
)
force_install_gpu_providers()
return providers
def is_cpu_provider() -> bool:
return get_providers() == ["CPUExecutionProvider"]
def cosine_similarity_face(face1: Face, face2: Face) -> float:
@ -95,7 +138,7 @@ def compare_faces(img1: PILImage, img2: PILImage) -> float:
def batch_process(
src_images: List[PILImage],
src_images: List[Union[PILImage, str]], # image or filename
save_path: Optional[str],
units: List[FaceSwapUnitSettings],
postprocess_options: PostProcessingOptions,
@ -104,7 +147,7 @@ def batch_process(
Process a batch of images, apply face swapping according to the given settings, and optionally save the resulting images to a specified path.
Args:
src_images (List[PILImage]): List of source PIL Images to process.
src_images (List[Union[PILImage, str]]): List of source PIL Images to process or list of images file names
save_path (Optional[str]): Destination path where the processed images will be saved. If None, no images are saved.
units (List[FaceSwapUnitSettings]): List of FaceSwapUnitSettings to apply to the images.
postprocess_options (PostProcessingOptions): Post-processing settings to be applied to the images.
@ -123,6 +166,18 @@ def batch_process(
if src_images is not None and len(units) > 0:
result_images = []
for src_image in src_images:
if isinstance(src_image, str):
if save_path:
path = os.path.join(
save_path, "swapped_" + os.path.basename(src_image)
)
src_image = Image.open(src_image)
elif save_path:
path = tempfile.NamedTemporaryFile(
delete=False, suffix=".png", dir=save_path
).name
assert isinstance(src_image, Image.Image)
current_images = []
swapped_images = process_images_units(
get_current_model(), images=[(src_image, None)], units=units
@ -138,9 +193,6 @@ def batch_process(
if save_path:
for img in current_images:
path = tempfile.NamedTemporaryFile(
delete=False, suffix=".png", dir=save_path
).name
img.save(path)
result_images += current_images
@ -257,8 +309,10 @@ def capture_stdout() -> Generator[StringIO, None, None]:
sys.stdout = original_stdout # Type: ignore
@lru_cache(maxsize=1)
def getAnalysisModel() -> insightface.app.FaceAnalysis:
@lru_cache(maxsize=3)
def getAnalysisModel(
det_size: Tuple[int, int] = (640, 640), det_thresh: float = 0.5
) -> insightface.app.FaceAnalysis:
"""
Retrieves the analysis model for face analysis.
@ -269,11 +323,16 @@ def getAnalysisModel() -> insightface.app.FaceAnalysis:
if not os.path.exists(faceswaplab_globals.ANALYZER_DIR):
os.makedirs(faceswaplab_globals.ANALYZER_DIR)
logger.info("Load analysis model, will take some time. (> 30s)")
providers = get_providers()
logger.info(
f"Load analysis model det_size={det_size}, det_thresh={det_thresh}, providers = {providers}, will take some time. (> 30s)"
)
# Initialize the analysis model with the specified name and providers
with tqdm(
total=1, desc="Loading analysis model (first time is slow)", unit="model"
total=1,
desc=f"Loading {det_size} analysis model (first time is slow)",
unit="model",
) as pbar:
with capture_stdout() as captured:
model = insightface.app.FaceAnalysis(
@ -281,6 +340,9 @@ def getAnalysisModel() -> insightface.app.FaceAnalysis:
providers=providers,
root=faceswaplab_globals.ANALYZER_DIR,
)
# Prepare the analysis model for face detection with the specified detection size
model.prepare(ctx_id=0, det_thresh=det_thresh, det_size=det_size)
pbar.update(1)
logger.info("%s", pformat(captured.getvalue()))
@ -292,25 +354,6 @@ def getAnalysisModel() -> insightface.app.FaceAnalysis:
raise FaceModelException("Loading of analysis model failed")
def is_sha1_matching(file_path: str, expected_sha1: str) -> bool:
sha1_hash = hashlib.sha1(usedforsecurity=False)
try:
with open(file_path, "rb") as file:
for byte_block in iter(lambda: file.read(4096), b""):
sha1_hash.update(byte_block)
if sha1_hash.hexdigest() == expected_sha1:
return True
else:
return False
except Exception as e:
logger.error(
"Failed to check model hash, check the model is valid or has been downloaded adequately : %e",
e,
)
traceback.print_exc()
return False
@lru_cache(maxsize=1)
def getFaceSwapModel(model_path: str) -> upscaled_inswapper.UpscaledINSwapper:
"""
@ -323,14 +366,7 @@ def getFaceSwapModel(model_path: str) -> upscaled_inswapper.UpscaledINSwapper:
insightface.model_zoo.FaceModel: The face swap model.
"""
try:
expected_sha1 = "17a64851eaefd55ea597ee41e5c18409754244c5"
if not is_sha1_matching(model_path, expected_sha1):
logger.error(
"Suspicious sha1 for model %s, check the model is valid or has been downloaded adequately. Should be %s",
model_path,
expected_sha1,
)
providers = get_providers()
with tqdm(total=1, desc="Loading swap model", unit="model") as pbar:
with capture_stdout() as captured:
model = upscaled_inswapper.UpscaledINSwapper(
@ -350,8 +386,8 @@ def getFaceSwapModel(model_path: str) -> upscaled_inswapper.UpscaledINSwapper:
def get_faces(
img_data: CV2ImgU8,
det_size: Tuple[int, int] = (640, 640),
det_thresh: Optional[float] = None,
det_size: Tuple[int, int] = (640, 640),
) -> List[Face]:
"""
Detects and retrieves faces from an image using an analysis model.
@ -368,24 +404,36 @@ def get_faces(
if det_thresh is None:
det_thresh = opts.data.get("faceswaplab_detection_threshold", 0.5)
# Create a deep copy of the analysis model (otherwise det_size is attached to the analysis model and can't be changed)
face_analyser = copy.deepcopy(getAnalysisModel())
auto_det_size = opts.data.get("faceswaplab_auto_det_size", True)
if not auto_det_size:
x = opts.data.get("faceswaplab_det_size", 640)
det_size = (x, x)
# Prepare the analysis model for face detection with the specified detection size
face_analyser.prepare(ctx_id=0, det_thresh=det_thresh, det_size=det_size)
face_analyser = getAnalysisModel(det_size, det_thresh)
# Get the detected faces from the image using the analysis model
face = face_analyser.get(img_data)
faces = face_analyser.get(img_data)
# If no faces are detected and the detection size is larger than 320x320,
# recursively call the function with a smaller detection size
if len(face) == 0 and det_size[0] > 320 and det_size[1] > 320:
det_size_half = (det_size[0] // 2, det_size[1] // 2)
return get_faces(img_data, det_size=det_size_half, det_thresh=det_thresh)
if len(faces) == 0:
if auto_det_size:
if det_size[0] > 320 and det_size[1] > 320:
det_size_half = (det_size[0] // 2, det_size[1] // 2)
return get_faces(
img_data, det_size=det_size_half, det_thresh=det_thresh
)
# If no faces are detected print a warning to user about change in detection
else:
if det_size[0] > 320:
logger.warning(
"No faces detected, you might want to play with det_size by reducing it (in sd global settings). Lower (320) means more detection but less precise. Or activate auto-det-size."
)
try:
# Sort the detected faces based on their x-coordinate of the bounding box
return sorted(face, key=lambda x: x.bbox[0])
return sorted(faces, key=lambda x: x.bbox[0])
except Exception as e:
logger.error("Failed to get faces %s", e)
traceback.print_exc()

@ -195,7 +195,7 @@ class UpscaledINSwapper(INSwapper):
logger.info("*" * 80)
logger.info(f"Inswapper")
if options.upscaler_name:
if options.upscaler_name and options.upscaler_name != "None":
# Upscale original image
k = 4
aimg, M = face_align.norm_crop2(
@ -262,7 +262,6 @@ class UpscaledINSwapper(INSwapper):
)
img_white[img_white > 20] = 255
fthresh = 10
print("fthresh", fthresh)
fake_diff[fake_diff < fthresh] = 0
fake_diff[fake_diff >= fthresh] = 255
img_mask = img_white

@ -17,7 +17,7 @@ def postprocessing_ui() -> List[gr.components.Component]:
choices=["None"] + [x.name() for x in shared.face_restorers],
value=lambda: opts.data.get(
"faceswaplab_pp_default_face_restorer",
"None",
shared.face_restorers[0].name(),
),
type="value",
elem_id="faceswaplab_pp_face_restorer",

@ -216,12 +216,10 @@ def batch_process(
]
postprocess_options = classes[-1]
images = [
Image.open(file.name) for file in files
] # potentially greedy but Image.open is supposed to be lazy
images_paths = [file.name for file in files]
return swapper.batch_process(
images,
images_paths,
save_path=save_path,
units=units,
postprocess_options=postprocess_options,

@ -10,7 +10,9 @@ def faceswap_unit_advanced_options(
is_img2img: bool, unit_num: int = 1, id_prefix: str = "faceswaplab_"
) -> List[gr.components.Component]:
with gr.Accordion(f"Post-Processing & Advanced Mask Options", open=False):
gr.Markdown("""Post-processing and mask settings for unit faces""")
gr.Markdown(
"""Post-processing and mask settings for unit faces. Best result : checks all, use LDSR, use Codeformer"""
)
with gr.Row():
face_restorer_name = gr.Radio(
label="Restore Face",
@ -209,6 +211,16 @@ def faceswap_unit_ui(
elem_id=f"{id_prefix}_face{unit_num}_swap_in_generated",
)
gr.Markdown(
"""
## Advanced Options
**Simple :** If you have bad results and don't want to fine-tune here, just enable Codeformer in "Global Post-Processing".
Otherwise, read the [doc](https://glucauze.github.io/sd-webui-faceswaplab/doc/) to understand following options.
"""
)
with gr.Accordion("Similarity", open=False):
gr.Markdown("""Discard images with low similarity or no faces :""")
with gr.Row():
