Version: v1.0
Input
prompt *
Specify things to see in the output
negative_prompt
Specify things to not see in the output
num_outputs
Number of output images
width
Output image width
height
Output image height
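As a rough sketch (the request format and transport are not specified on this page and are assumed), a minimal text-to-image input covering the core fields above could be built like this in Python:

```python
# Minimal text-to-image input. Field names come from the parameter list above;
# the flat-dictionary request shape is an assumption, not part of this page.
minimal_input = {
    "prompt": "greek empress, marble sculpture, toga, luminescent, 50mm",
    "negative_prompt": "nsfw, logo, text, poorly drawn face, extra limb",
    "num_outputs": 4,   # number of images to generate
    "width": 512,       # output image width in pixels (illustrative value)
    "height": 512,      # output image height in pixels (illustrative value)
}
```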
enhance_face_with_adetailer
Enhance the face with ADetailer
enhance_hands_with_adetailer
Enhance the hands with ADetailer
adetailer_denoising_strength
1: completely redraw face or hands / 0: no effect on output images
detail
Enhance/diminish detail while keeping the overall style/character
brightness
Adjust brightness
contrast
Adjust contrast
saturation
Adjust saturation
seed
The same seed with the same prompt generates the same image. Set to -1 to randomize the output.
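To show how the refinement and reproducibility fields combine, here is a hedged sketch in the same flat-dictionary form; the specific values are illustrative, and the value ranges are assumptions based on the descriptions above, not documented bounds:

```python
# Face/hand refinement, tone adjustments, and a fixed seed for reproducibility.
refinement_input = {
    "enhance_face_with_adetailer": True,
    "enhance_hands_with_adetailer": False,
    "adetailer_denoising_strength": 0.4,  # 0 = no effect, 1 = completely redraw face/hands
    "detail": 1.0,        # enhance/diminish detail while keeping the overall style
    "brightness": 0.0,    # tone adjustments; 0.0 assumed to mean "no change"
    "contrast": 0.0,
    "saturation": 0.0,
    "seed": 4114980665,   # same seed + same prompt -> same image; -1 randomizes
}
```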
input_image
Base image from which the output is generated. Useful when you want to add detail to input_image. For example, if the prompt is "sunglasses" and input_image contains a man, the output shows the man wearing sunglasses.
input_image_redrawing_strength
How much the output differs from input_image. Used only when input_image is given.
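For the img2img path, the sunglasses example above might look like the following sketch; how the image itself is passed (local path, URL, or upload handle) is an assumption:

```python
# img2img-style usage: add detail to an existing picture of a man.
# "man.png" is a hypothetical file name.
img2img_input = {
    "prompt": "sunglasses",
    "input_image": "man.png",               # base image the output is generated from
    "input_image_redrawing_strength": 0.5,  # higher = output differs more from input_image
}
```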
reference_image
Image with which the output should share identity (e.g., a person's face or a dog's breed)
reference_image_strength
Strength of applying reference_image. Used only when reference_image is given.
reference_pose_image
Image with a reference pose
reference_pose_strength
Strength of applying reference_pose_image. Used only when reference_pose_image is given.
reference_depth_image
Image with a reference depth
reference_depth_strength
Strength of applying reference_depth_image. Used only when reference_depth_image is given.
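The three reference inputs can be combined; a hedged sketch follows, with hypothetical file names and strengths in an assumed 0–1 range:

```python
# Identity, pose, and depth references with their respective strengths.
# Each *_strength field is used only when its matching image is given.
reference_input = {
    "reference_image": "same_person_face.png",       # output should share this identity
    "reference_image_strength": 0.8,
    "reference_pose_image": "target_pose.png",       # pose to imitate
    "reference_pose_strength": 0.6,
    "reference_depth_image": "depth_reference.png",  # depth layout to imitate
    "reference_depth_strength": 0.5,
}
```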
sampler
Sampler type
samping_steps
Number of denoising steps
cfg_scale
Scale for classifier-free guidance
clip_skip
The number of final layers of the CLIP network to skip
vae
Select VAE
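The sampler-related settings group naturally as well; the sampler and VAE names below are placeholders, since the accepted values are not listed on this page:

```python
# Sampler and model-plumbing settings. The sampler/VAE strings are placeholders.
sampler_input = {
    "sampler": "DPM++ 2M Karras",  # placeholder sampler name
    "samping_steps": 20,           # number of denoising steps (parameter name as documented)
    "cfg_scale": 7.0,              # classifier-free guidance scale
    "clip_skip": 2,                # number of final CLIP layers to skip
    "vae": "default",              # placeholder VAE selection
}
```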
lora_1
LoRA file. Apply by writing the following in the prompt: <lora:FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE>
lora_2
LoRA file. Apply by writing the following in the prompt: <lora:FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE>
lora_3
LoRA file. Apply by writing the following in the prompt: <lora:FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE>
embedding_1
Embedding file (textual inversion). Apply by writing the following in the prompt or negative prompt: (FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE)
embedding_2
Embedding file (textual inversion). Apply by writing the following in the prompt or negative prompt: (FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE)
embedding_3
Embedding file (textual inversion). Apply by writing the following in the prompt or negative prompt: (FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE)
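LoRA and embedding files are referenced inside the prompt text using the tags above, in addition to being supplied in the lora_N / embedding_N fields. A sketch with hypothetical file names:

```python
# Referencing uploaded LoRA and embedding files inside the prompt text.
# "my_style" and "bad_anatomy_embed" are hypothetical file names.
lora_embedding_input = {
    "lora_1": "my_style.safetensors",        # hypothetical LoRA file
    "embedding_1": "bad_anatomy_embed.pt",   # hypothetical textual-inversion file
    "prompt": "greek empress, toga <lora:my_style:0.7>",
    "negative_prompt": "(bad_anatomy_embed:1.0), poorly drawn face",
}
```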
disable_prompt_modification
Disable the automatic addition of suggested prompt modifications. Built-in LoRAs and trigger words will remain.
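Putting it together, a complete call might look like the sketch below. Only the field names come from this page; the endpoint URL, authentication, and response shape are assumptions.

```python
import requests

# Assemble a full input from the fragments above.
API_URL = "https://example.invalid/your-model-endpoint"  # placeholder, not a real endpoint

payload = {
    "prompt": "old gold, (Marble sculpture), greek empress, toga, luminescent, stunning, 50mm",
    "negative_prompt": "nsfw, logo, text, poorly drawn face, extra limb",
    "num_outputs": 4,
    "width": 512,
    "height": 768,
    "seed": -1,                            # randomize the output
    "enhance_face_with_adetailer": True,
    "adetailer_denoising_strength": 0.4,
    "sampler": "DPM++ 2M Karras",          # placeholder sampler name
    "samping_steps": 20,
    "cfg_scale": 7.0,
    "disable_prompt_modification": False,  # keep the suggested prompt additions
}

response = requests.post(API_URL, json=payload, timeout=600)
response.raise_for_status()
print(response.json())  # expected to contain output image URLs, as in the Output section below
```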
Output
https://files.tungsten.run/uploads/9b1057ed9a28491080c8651252c2eeec/00000-4114980665.webp
https://files.tungsten.run/uploads/6d0f74a3557c4e3ba8113448611a2673/00001-4114980666.webp
https://files.tungsten.run/uploads/34a572f2d0b04e62a09ae12ef690d83f/00002-4114980667.webp
https://files.tungsten.run/uploads/cf4152da9ef74a709f0b89b090912621/00003-4114980668.webp
This example was created by evevalentine2017
Finished in 104.3 seconds
Full prompt: old gold, (Marble sculpture), greek empress, toga, moody and Psychedelic ibex, Gopnik and Kidcore, luminescent, stunning, 50mm
Full negative prompt: nsfw, logo, text, ng_deepnegative_v1_75t, rev2-badprompt, verybadimagenegative_v1.3, mutated hands and fingers, poorly drawn face, extra limb, missing limb, disconnected limbs, malformed hands, ugly
Run log (abridged): 20 denoising steps at roughly 1.6 s/it, latents decoded in 0.96 s, ADetailer detected 1 face (640x640 detection pass).