Fish-Eye Footage with Hello World SDXL
Input
prompt
Specify things to see in the output
CCTV footage of a 18 age asian woman look up at forest, Polaroid filterd fish-eye lens, <lora:SDXL Detail:1>
negative_prompt
Specify things not to see in the output
(worst quality, low resolution, bad hands, open mouth), distorted, twisted, watermark, looking at the viewer
num_outputs
Number of output images
3
width
Output image width
1024
height
Output image height
1024
enhance_face_with_adetailer
Enhance face with adetailer
true
enhance_hands_with_adetailer
Enhance hands with adetailer
true
adetailer_denoising_strength
1: completely redraw face or hands / 0: no effect on output images
0.45
detail
Enhance/diminish detail while keeping the overall style/character
0
brightness
Adjust brightness
0
contrast
Adjust contrast
0
seed
The same seed with the same prompt generates the same image. Set to -1 to randomize the output.
1344045162
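The output filenames further down the page (00000-1344045162, 00001-1344045163, 00002-1344045164) suggest that when num_outputs is greater than 1, each image receives the base seed plus its index. This is inferred from this run's filenames, not from documented behavior; a minimal sketch of that assumption:

    # Assumed per-image seed derivation, inferred from the output filenames
    # (00000-1344045162, 00001-1344045163, ...); not documented behavior.
    base_seed = 1344045162
    num_outputs = 3
    per_image_seeds = [base_seed + i for i in range(num_outputs)]
    print(per_image_seeds)  # [1344045162, 1344045163, 1344045164]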
input_image
Base image from which the output is generated. Useful when you want to add detail to input_image. For example, if the prompt is "sunglasses" and input_image shows a man, the output shows the man wearing sunglasses.
input_image_redrawing_strength
How different the output is from input_image. Used only when input_image is given.
0.55
reference_image
Image with which the output should share identity (e.g., the face of a person or the type of a dog)
reference_image_strength
Strength of applying reference_image. Used only when reference_image is given.
1
reference_pose_image
Image with a reference pose
reference_pose_strength
Strength of applying reference_pose_image. Used only when reference_pose_image is given.
1
reference_depth_image
Image with a reference depth
reference_depth_strength
Strength of applying reference_depth_image. Used only when reference_depth_image is given.
1
sampler
Sampler type
Restart
samping_steps
Number of denoising steps
30
cfg_scale
Scale for classifier-free guidance
4
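For reference, classifier-free guidance conventionally blends the conditional and unconditional noise predictions, with cfg_scale controlling how strongly the prompt is followed. The sketch below shows only the standard formulation; it is not this model's internal sampler code.

    import numpy as np

    def cfg_combine(noise_uncond: np.ndarray, noise_cond: np.ndarray, cfg_scale: float) -> np.ndarray:
        # Standard classifier-free guidance: push the conditional prediction
        # away from the unconditional one by cfg_scale.
        return noise_uncond + cfg_scale * (noise_cond - noise_uncond)

    # cfg_scale = 1 keeps the plain conditional prediction; the value 4 used
    # in this run follows the prompt more strongly at the cost of variety.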
clip_skip
The number of final layers of the CLIP network to skip
2
vae
Select VAE
sdxl_vae.safetensors
lora_1
LoRA file. Apply it by writing the following in the prompt: <lora:FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE>
lora_2
LoRA file. Apply it by writing the following in the prompt: <lora:FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE>
lora_3
LoRA file. Apply it by writing the following in the prompt: <lora:FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE>
embedding_1
Embedding file (textual inversion). Apply it by writing the following in the prompt or negative prompt: (FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE)
embedding_2
Embedding file (textual inversion). Apply it by writing the following in the prompt or negative prompt: (FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE)
embedding_3
Embedding file (textual inversion). Apply it by writing the following in the prompt or negative prompt: (FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE)
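Both LoRA and embedding files are referenced from the prompt text using the tag formats given above. A small helper like the following (an illustrative sketch, not part of the model; the file names in the usage lines are hypothetical) builds those tags from a file path:

    from pathlib import Path

    def lora_tag(path: str, magnitude: float = 1) -> str:
        # <lora:FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE>, written into the prompt.
        return f"<lora:{Path(path).stem}:{magnitude}>"

    def embedding_tag(path: str, magnitude: float = 1) -> str:
        # (FILE_NAME_WITHOUT_EXTENSION:MAGNITUDE), written into the prompt or negative prompt.
        return f"({Path(path).stem}:{magnitude})"

    print(lora_tag("SDXL Detail.safetensors", 1))  # <lora:SDXL Detail:1>
    print(embedding_tag("bad-hands.pt", 1.2))      # (bad-hands:1.2)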
disable_prompt_modification
Disable the automatic addition of suggested prompt modifications. Built-in LoRAs and trigger words will remain.
false
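To reproduce this run, the same inputs can be submitted programmatically. The sketch below assumes a generic JSON prediction endpoint; the URL, auth header, and response handling are placeholders rather than Tungsten's documented API, so adapt them to the actual client or REST documentation.

    import requests

    # Placeholders, NOT the documented API surface.
    API_URL = "https://example.invalid/predict"
    API_TOKEN = "YOUR_TOKEN"

    inputs = {
        "prompt": ("CCTV footage of a 18 age asian woman look up at forest, "
                   "Polaroid filterd fish-eye lens, <lora:SDXL Detail:1>"),
        "negative_prompt": ("(worst quality, low resolution, bad hands, open mouth), "
                            "distorted, twisted, watermark, looking at the viewer"),
        "num_outputs": 3,
        "width": 1024,
        "height": 1024,
        "enhance_face_with_adetailer": True,
        "enhance_hands_with_adetailer": True,
        "adetailer_denoising_strength": 0.45,
        "seed": 1344045162,
        "sampler": "Restart",
        "samping_steps": 30,
        "cfg_scale": 4,
        "clip_skip": 2,
        "vae": "sdxl_vae.safetensors",
    }

    resp = requests.post(API_URL, json=inputs,
                         headers={"Authorization": f"Bearer {API_TOKEN}"},
                         timeout=600)
    resp.raise_for_status()
    print(resp.json())  # expected to contain the three output image URLs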
Output
https://files.tungsten.run/uploads/f5997db6e95c4cc8bdfac36f39a64a1d/00000-1344045162.webp
https://files.tungsten.run/uploads/7c776c347fe0447890ed8c3898b2db4b/00001-1344045163.webp
https://files.tungsten.run/uploads/e8e6b211055d4f5b9281a1aaaf447c8d/00002-1344045164.webp
Finished in 135.8 seconds
Setting up the model...
Preparing inputs...
Processing...
Loading VAE weight: models/VAE/sdxl_vae.safetensors
Full prompt: CCTV footage of a 18 age asian woman look up at forest, Polaroid filterd fish-eye lens, <lora:SDXL Detail:1>
Full negative prompt: (worst quality, low resolution, bad hands, open mouth), distorted, twisted, watermark, looking at the viewer
100%|██████████| 30/30 [01:21<00:00, 2.72s/it]
Decoding latents in cuda:0... done in 2.38s
Move latents to cpu... done in 0.03s
0: 640x640 1 face, 7.8ms
Speed: 3.4ms preprocess, 7.8ms inference, 29.0ms postprocess per image at shape (1, 3, 640, 640)
100%|██████████| 14/14 [00:11<00:00, 1.26it/s]
Decoding latents in cuda:0... done in 0.8s
Move latents to cpu... done in 0.0s
0: 640x640 (no detections), 7.4ms
Speed: 3.1ms preprocess, 7.4ms inference, 0.8ms postprocess per image at shape (1, 3, 640, 640)
[-] ADetailer: nothing detected on image 1 with 2nd settings.
0: 640x640 1 face, 7.5ms
Speed: 3.0ms preprocess, 7.5ms inference, 1.4ms postprocess per image at shape (1, 3, 640, 640)
100%|██████████| 14/14 [00:11<00:00, 1.26it/s]
Decoding latents in cuda:0... done in 0.81s
Move latents to cpu... done in 0.0s
0: 640x640 (no detections), 7.3ms
Speed: 3.0ms preprocess, 7.3ms inference, 0.8ms postprocess per image at shape (1, 3, 640, 640)
[-] ADetailer: nothing detected on image 2 with 2nd settings.
0: 640x640 1 face, 7.4ms
Speed: 2.9ms preprocess, 7.4ms inference, 1.3ms postprocess per image at shape (1, 3, 640, 640)
100%|██████████| 14/14 [00:11<00:00, 1.26it/s]
Decoding latents in cuda:0... done in 0.8s
Move latents to cpu... done in 0.0s
0: 640x640 (no detections), 7.3ms
Speed: 3.3ms preprocess, 7.3ms inference, 0.7ms postprocess per image at shape (1, 3, 640, 640)
[-] ADetailer: nothing detected on image 3 with 2nd settings.
Uploading outputs...
Finished.