Version: v1.0
Input
prompt *
Specify things to see in the output
negative_prompt
Specify things not to see in the output
num_outputs
Number of output images
width
Output image width
height
Output image height
enhance_face_with_adetailer
Enhance faces with ADetailer
enhance_hands_with_adetailer
Enhance hands with ADetailer
adetailer_denoising_strength
1: completely redraw face or hands / 0: no effect on output images
detail
Enhance/diminish detail while keeping the overall style/character
brightness
Adjust brightness
contrast
Adjust contrast
seed
The same seed with the same prompt generates the same image. Set to -1 to randomize the output.
input_image
Base image that the output should be generated from. This is useful when you want to add some detail to input_image. For example, if the prompt is "sunglasses" and input_image contains a man, the output shows that man wearing sunglasses.
input_image_redrawing_strength
How different the output is from input_image. Used only when input_image is given.
reference_image
Image whose identity the output should share (e.g. a person's face or a type of dog)
reference_image_strength
Strength of applying reference_image. Used only when reference_image is given.
reference_pose_image
Image with a reference pose
reference_pose_strength
Strength of applying reference_pose_image. Used only when reference_pose_image is given.
reference_depth_image
Image with a reference depth
reference_depth_strength
Strength of applying reference_depth_image. Used only when reference_depth_image is given.
sampler
Sampler type
samping_steps
Number of denoising steps
cfg_scale
Scale for classifier-free guidance
clip_skip
The number of final layers of the CLIP network to skip
vae
Select VAE
lora_1
LoRA file. Apply by writing the following in prompt: <lora:FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE>
lora_2
LoRA file. Apply by writing the following in prompt: <lora:FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE>
lora_3
LoRA file. Apply by writing the following in prompt: <lora:FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE>
embedding_1
Embedding file (textual inversion). Apply by writing the following in the prompt or negative prompt: (FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE)
embedding_2
Embedding file (textual inversion). Apply by writing the following in the prompt or negative prompt: (FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE)
embedding_3
Embedding file (textual inversion). Apply by writing the following in the prompt or negative prompt: (FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE)
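The LoRA and embedding activation tokens described above can be composed programmatically. A minimal sketch in Python; the helper functions are illustrative (not part of this model's API), and the LoRA file names follow the example run on this page.

```python
def with_lora(prompt: str, file_name: str, magnitude: float) -> str:
    """Append a LoRA activation token: <lora:FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE>."""
    return f"{prompt}, <lora:{file_name}:{magnitude}>"

def with_embedding(prompt: str, file_name: str, magnitude: float) -> str:
    """Append a textual-inversion token: (FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE)."""
    return f"{prompt}, ({file_name}:{magnitude})"

# A negative magnitude inverts the effect of a slider-style LoRA,
# as with EnvyLineArtSliderXL01 in the example run below.
prompt = with_lora("anime style, utopian scifi metropolis", "EnvyLiminalXL01", 1)
prompt = with_lora(prompt, "EnvyLineArtSliderXL01", -1)
```

Embedding tokens built with `with_embedding` can go in either the prompt or the negative prompt, matching the descriptions above.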
disable_prompt_modification
Disable the automatic addition of suggested prompt modifications. Built-in LoRAs and trigger words remain.
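Taken together, the inputs above form a flat key-value payload in which only prompt is required. A minimal sketch, assuming the model accepts a JSON-style body of these fields; the validation helper itself is illustrative and not part of any actual client, and the field list mirrors this page (including the samping_steps spelling).

```python
def build_inputs(prompt: str, **options) -> dict:
    """Assemble an input payload for the model; omitted fields use server defaults."""
    allowed = {
        "negative_prompt", "num_outputs", "width", "height",
        "enhance_face_with_adetailer", "enhance_hands_with_adetailer",
        "adetailer_denoising_strength", "detail", "brightness", "contrast",
        "seed", "input_image", "input_image_redrawing_strength",
        "reference_image", "reference_image_strength",
        "reference_pose_image", "reference_pose_strength",
        "reference_depth_image", "reference_depth_strength",
        "sampler", "samping_steps",  # field name spelled as on this page
        "cfg_scale", "clip_skip", "vae",
        "lora_1", "lora_2", "lora_3",
        "embedding_1", "embedding_2", "embedding_3",
        "disable_prompt_modification",
    }
    unknown = set(options) - allowed
    if unknown:
        raise ValueError(f"unknown input fields: {sorted(unknown)}")
    return {"prompt": prompt, **options}

# seed=-1 randomizes the output; three images per run.
payload = build_inputs("a man wearing sunglasses",
                       seed=-1, num_outputs=3, width=640, height=480)
```

How the payload is transmitted (endpoint URL, authentication) is not documented on this page, so the sketch stops at payload construction.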
Output
https://files.tungsten.run/uploads/f552bce070f048cf872da8b8c0373f28/00000-773028352.webp
https://files.tungsten.run/uploads/8580c61ccd8b402b9a556c4ec61a8918/00001-773028353.webp
https://files.tungsten.run/uploads/36691c4241264d449ca33c9c899ad2da/00002-773028354.webp
This example was created by evevalentine2017
Finished in 45.4 seconds
Setting up the model... Preparing inputs... Processing...
Loading VAE weight: models/VAE/sdxl_vae.safetensors
Full prompt: Masterpiece, best quality, high res, anime style, simple bold colored lineart, noon, Utopian scifi metropolis at the beginning of the universe, <lora:EnvyLiminalXL01:1>, liminal, <lora:EnvyLineArtSliderXL01:-1>, <lora:EnvyAwesomizeXL01:1>, awesomize, <lora:TLS:0.9>
Full negative prompt: bad anatomy, desaturated, poor quality, bad quality, low resolution, dark, yellow tint, western comics, abstract, nsfw, nipples, kid, child, loli
Sampling: 8/8 steps in 29s (avg 3.63 s/it)
Decoding latents in cuda:0... done in 1.72s
Move latents to cpu... done in 0.02s
ADetailer: nothing detected on images 1-3 with 1st settings (480x640, ~8-169ms inference per image).
Uploading outputs... Finished.