Version: v1.0
Input
prompt *
Specify things to see in the output
negative_prompt
Specify things to not see in the output
num_outputs
Number of output images
width
Output image width
height
Output image height
enhance_face_with_adetailer
Enhance face with adetailer
enhance_hands_with_adetailer
Enhance hands with adetailer
adetailer_denoising_strength
1: completely redraw face or hands / 0: no effect on output images
detail
Enhance/diminish detail while keeping the overall style/character
brightness
Adjust brightness
contrast
Adjust contrast
seed
The same seed with the same prompt generates the same image. Set to -1 to randomize the output.
input_image
Base image that the output should be generated from. This is useful when you want to add some detail to input_image. For example, if the prompt is "sunglasses" and input_image shows a man, the output will show the man wearing sunglasses.
input_image_redrawing_strength
How different the output is from input_image. Used only when input_image is given.
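As a sketch of the "sunglasses" example above (the file reference, the strength value, and the way the image is supplied are illustrative assumptions; the platform defines how images are actually passed):

```python
# Hypothetical payload fragment for the "sunglasses" example above.
img2img_fields = {
    "prompt": "sunglasses",
    "input_image": "portrait_of_a_man.png",   # placeholder reference to the base image
    # Assumption: lower values keep the output close to input_image,
    # higher values let the model redraw more of it.
    "input_image_redrawing_strength": 0.35,
}
```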
reference_image
Image with which the output should share identity (e.g. the face of a person or the breed of a dog)
reference_image_strength
Strength of applying reference_image. Used only when reference_image is given.
reference_pose_image
Image with a reference pose
reference_pose_strength
Strength of applying reference_pose_image. Used only when reference_pose_image is given.
reference_depth_image
Image with a reference depth
reference_depth_strength
Strength of applying reference_depth_image. Used only when reference_depth_image is given.
sampler
Sampler type
samping_steps
Number of denoising steps
cfg_scale
Scale for classifier-free guidance
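For intuition, classifier-free guidance typically blends the unconditional and prompt-conditioned noise predictions inside the sampler; a minimal sketch of that blend (not this model's exact code) is:

```python
import numpy as np

def apply_cfg(noise_uncond: np.ndarray, noise_cond: np.ndarray, cfg_scale: float) -> np.ndarray:
    """Classifier-free guidance blend used by most Stable Diffusion samplers:
    push the prediction away from the unconditional output and toward the
    prompt-conditioned one. cfg_scale = 1.0 reproduces the conditional
    prediction; larger values follow the prompt more strictly."""
    return noise_uncond + cfg_scale * (noise_cond - noise_uncond)
```

Higher cfg_scale values follow the prompt more literally at the cost of variety, and very high values can over-saturate or distort the image.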
clip_skip
The number of final layers of the CLIP network to skip
vae
Select VAE
lora_1
LoRA file. Apply it by adding the following tag to the prompt: <lora:FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE>
lora_2
LoRA file. Apply it by adding the following tag to the prompt: <lora:FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE>
lora_3
LoRA file. Apply it by adding the following tag to the prompt: <lora:FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE>
embedding_1
Embedding file (textual inversion). Apply it by adding the following to the prompt or negative prompt: (FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE)
embedding_2
Embedding file (textual inversion). Apply it by adding the following to the prompt or negative prompt: (FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE)
embedding_3
Embedding file (textual inversion). Apply it by adding the following to the prompt or negative prompt: (FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE)
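As an illustration of the two syntaxes above (the file names are hypothetical; use the names of the files you actually upload, without their extensions):

```python
# Hypothetical file names for illustration only.
# A LoRA uploaded as lora_1 (e.g. flat_color_style.safetensors) is activated in the prompt;
# an embedding uploaded as embedding_1 (e.g. easynegative.pt) is weighted in the negative prompt.
prompt = "1girl, city street at night, <lora:flat_color_style:0.8>"
negative_prompt = "(easynegative:1.2), blurry, watermark, text"
```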
disable_prompt_modification
Disable the automatic addition of suggested prompt modifications. Built-in LoRAs and trigger words will remain.
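Putting the fields together, here is a minimal sketch of a text-to-image request payload. The endpoint URL, the use of the requests library, and the sampler name are assumptions for illustration; consult the Tungsten documentation for the actual API and accepted values.

```python
import requests  # assumption: the model is exposed as a JSON-over-HTTP endpoint

# Hypothetical endpoint; replace with the real model URL from tungsten.run.
MODEL_URL = "https://example.invalid/models/my-model/predict"

payload = {
    "prompt": "Anime, flat colors, celshade, best quality, dark city background, horror theme",
    "negative_prompt": "(worst quality, low quality:1.0), blurry, watermark, text",
    "num_outputs": 3,
    "width": 768,
    "height": 1024,
    "seed": -1,                    # -1 picks a random seed; fix it to reproduce a result
    "sampler": "DPM++ 2M Karras",  # hypothetical value; use one the model actually lists
    "samping_steps": 25,           # field name copied verbatim from the listing above
    "cfg_scale": 7.0,
    "clip_skip": 2,
    "enhance_face_with_adetailer": True,
    "adetailer_denoising_strength": 0.4,
}

response = requests.post(MODEL_URL, json=payload, timeout=600)
response.raise_for_status()
print(response.json())  # expected to reference the generated image files, as in the Output section
```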
Output
https://files.tungsten.run/uploads/ce329514c3ff462fb3f860d4d770caaf/00000-2451895994.webp
https://files.tungsten.run/uploads/8990e3e006124dc4943aa429986f4e67/00001-2451895995.webp
https://files.tungsten.run/uploads/86734ca5073f4d52bddde8671e1b3c31/00002-2451895996.webp
This example was created by evevalentine2017
Finished in 39.4 seconds
Setting up the model...
Preparing inputs...
Processing...
Loading VAE weight: models/VAE/sdxl_vae.safetensors
Full prompt: Anime, plain colors, flat colors, celshade, best quality, HUNK from Resident Evil, holding weapon, gun, dark city background, horror theme
Full negative prompt: negativeXL_D, realistic, real photo, textured skin, realism, 3d, volumetric, looking at the viewer, nipples, nsfw, (worst quality, low quality:1.0), easynegative, (extra fingers, malformed hands, polydactyly:1.5), blurry, watermark, text
100%|██████████| 10/10 [00:17<00:00, 1.71s/it]
Decoding latents in cuda:0... done in 1.74s
Move latents to cpu... done in 0.02s
0: 640x480 1 face, 158.3ms
Speed: 2.5ms preprocess, 158.3ms inference, 22.4ms postprocess per image at shape (1, 3, 640, 480)
100%|██████████| 5/5 [00:02<00:00, 1.80it/s]
Decoding latents in cuda:0... done in 0.57s
Move latents to cpu... done in 0.01s
0: 640x480 1 face, 8.0ms
Speed: 2.3ms preprocess, 8.0ms inference, 1.3ms postprocess per image at shape (1, 3, 640, 480)
100%|██████████| 5/5 [00:02<00:00, 1.90it/s]
Decoding latents in cuda:0... done in 0.57s
Move latents to cpu... done in 0.0s
0: 640x480 1 face, 7.7ms
Speed: 2.3ms preprocess, 7.7ms inference, 1.3ms postprocess per image at shape (1, 3, 640, 480)
100%|██████████| 5/5 [00:02<00:00, 1.85it/s]
Decoding latents in cuda:0... done in 0.57s
Move latents to cpu... done in 0.0s
Uploading outputs...
Finished.