
Discussion about A.I.

ok. Thank you
I suspect it could have more to do with the lack of NSFW images in the training dataset than any active attempt at censorship in the generation process. There’s a possibility that they applied a filter to remove lewd keywords from the prompt, but proprietary models are known to be trained on heavily censored datasets anyway, which would make them ignorant of what naked girls actually look like.

I believe both of the images would be quite achievable with inpainting and a Lora in Stable Diffusion. But even SD would struggle to depict them without a good checkpoint, since the base SD model wasn’t trained on explicit images either.

In short, you must use SD if you want to make any kind of NSFW content.
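
For anyone wondering what that looks like in practice, here's a rough sketch of an inpainting pass using the diffusers library (the checkpoint and file names are just placeholders, and the Lora line is only a commented-out example; most people do the same thing through a UI like A1111 or ComfyUI rather than raw Python):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Placeholder checkpoint: any SD 1.5 inpainting model works the same way.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# Optionally stack a Lora on top for a concept the base model doesn't know.
# pipe.load_lora_weights("loras/some_concept.safetensors")  # hypothetical file

image = Image.open("base_image.png").convert("RGB")  # the picture to fix up
mask = Image.open("mask.png").convert("L")           # white = region to repaint

result = pipe(
    prompt="what should appear in the masked region",
    image=image,
    mask_image=mask,
    num_inference_steps=30,
    strength=0.9,  # how far the masked area may drift from the original
).images[0]
result.save("inpainted.png")
```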
ah...ahhh:grazy:
 
amazing how AI manages to censor
Well, doing an AI rework of the original piece would be a lot of effort. Even with a model that absolutely does allow nudity & lewdness, it will struggle with the relation of objects to each other, as it just doesn't form an internal spatial representation. You'll be struggling with weird fleshy masses where the tongue is supposed to meet the glans, etc. etc. Also, there is no way to do this with a single prompt.

The bigger question though is why

the task would be "add the 'photorealism' of AI while maintaining the general composition of the image"?

Even if this succeeds after an enormous amount of work to prevent/remove deformations and teratogenic hallucinations,

.... it isn't clear imho that the result would be a more effective or impressive piece.
The composition and the relation of the elements to each other are great as they are, the decreasing level of detail in the background figures makes the entire piece more effective, and "detailing" it in the sense of "oh, now we can see the texture of the skin on various parts of the anatomy" would probably just be a distraction...

edit, posted this before seeing that one
A low-effort attempt to convert the left image into a larger and slightly more detailed illustration:
... detail can be added but the effort is always going to be in preventing those oddities, see the tongue...
 
... detail can be added but the effort is always going to be in preventing those oddities, see the tongue...

To be fair, I didn't spend much time making that image, but I agree with what you said in general.

When I work on my AI renders, I usually spend more time fixing such deformities than trying to make them suit my taste.

It's something I'd hope to see improve as technology advances.

Even though I know better than the average AI user here what kinds of things do or don't work in such matters, that has little to do with artistic skill.

By the way, the tongue would be trivial to fix in that image. However, the male anatomy (especially the testicles) would've been very challenging if I had tried. The reason is that the man's body is only partially included in the frame, and fewer images depict that particular body part in the training set compared to those dealing with female anatomy.
 
Couple of questions from someone still struggling to get her head around this AI stuff...

Does Stable Diffusion run only in the cloud or can it be run offline?

Regarding training the software, does it require scanning (possibly censored) online data, or can it use local content to train itself in a similar way to the old FakeApp software? (Bearing in mind that most of us in here will already have a very large library of adult content on our hard drives...)

Can SD be used to improve the realism of Daz3D rendered images or is it just good for creating "original" pictures?
 

You can run it locally, but you'll need a graphics card (preferably from NVIDIA) with more than 8 GB of VRAM (12 GB or more is recommended).
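
To give a rough idea of what running it locally involves, here's a bare-bones generation with the diffusers library (just a sketch; the model name is an example, and most people use a UI such as A1111 or ComfyUI instead of raw Python):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Example SDXL checkpoint; a smaller SD 1.5 model will fit on weaker cards.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

# On cards with less VRAM, offloading parts of the model to system RAM helps,
# at the cost of generation speed.
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="a photo of ...",
    width=1024,
    height=1024,
    num_inference_steps=30,
).images[0]
image.save("output.png")
```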

As for training, it's unnecessary unless you want some concept for which there are no existing models (e.g. nailed hands, although I know someone from CF is working on it).

And SD can be great for both use cases. I often use my Blender renders as a basis for my AI works, and Daz3D is no different in this regard.

Also, @Hornet1ba has posted many AI works based on his Daz3D renders.


Edit: A few more words on training - the reason I said it's unnecessary is that you mentioned training on general adult content. Many great models (i.e., checkpoints and Loras) trained on carefully picked adult images already exist, so there's little point in doing it yourself unless you want something very specific and unusual.

One potential reason why one would want to train a custom model would be to depict a specific person, like some celebrity or even themselves. In that case, an easier path would be to use something called IPAdapter, which doesn't require training. But if you need to train a custom model (Lora, in this case), you'll typically need 20 or so 1024p+ images that show the concept.

The training program can be run locally, but most people choose to do it in the cloud since it requires even more VRAM than running SD itself. Fortunately, there are a few free options available, including Civitai and Google Colab.
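
To illustrate the IPAdapter route mentioned above, this is roughly what it looks like with the diffusers library (a sketch with placeholder file names; the FaceID variant needs an extra face-embedding step that I'm leaving out here):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline

# Hypothetical local photoreal SD 1.5 checkpoint in .safetensors format.
pipe = StableDiffusionPipeline.from_single_file(
    "models/some_photoreal_checkpoint.safetensors", torch_dtype=torch.float16
).to("cuda")

# A plain IP-Adapter conditions the generation on a reference image
# instead of requiring a trained Lora.
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.6)  # how strongly the reference image is followed

reference = Image.open("reference_face.jpg").convert("RGB")  # placeholder photo

image = pipe(
    prompt="photo of a woman standing in a garden",
    ip_adapter_image=reference,
    num_inference_steps=30,
).images[0]
image.save("ipadapter_result.png")
```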
 
However, the male anatomy (especially the testicles) would've been very challenging if I had tried. The reason is that the man's body is only partially included in the frame, and fewer images depict that particular body part in the training set compared to those dealing with female anatomy.
I see, there's always a trend to the tumorous with that... but it's kinda cute how it made up an orange headband for the girl in the background...
 
What I would really like is for some AI program to be able to make a somewhat realistic Daz3D model based on actual photographic reference.

For example, while I'm actually very happy with my Daz3D model of myself (see my avatar), it's pretty close but still not perfect, and with one or two notable exceptions, most of my attempts at creating Daz3D versions of real people have been quite disappointing at best and absolutely unusable at worst.

I've tried using various Daz tools such as Headshop to do this, but the results have been poor and a long way short of what I need :(

If there were a way to take (for example) the Daz Genesis 8 Female model and get AI to accurately map real photographic reference material onto it, then we'd be getting somewhere...
 

It's something I've been thinking of since I still do 3D work occasionally. I believe what you mentioned would be quite feasible, and I've even seen quite a few such prototypes on the internet already. So, it's highly likely we'll see a generative AI that can create a usable 3D model + textures from a photo soon if it's not available already.

But it's uncertain when, or if, it'd be available to the general public at an affordable price. It'd be nice if Daz3D could make it as a plugin, but they're probably too small a company to do something like that.

However, you can still make photorealistic AI images resemble yourself from a Daz model and a few photos with what we have now. It's not difficult to convert a Daz render (provided the composition is simple) into a photo-like image using control nets, and there are ways to make it look like you if you have a few photos of yourself, preferably with matching angles.

If you're interested in the details, search for IPAdapter + FaceID and you'll find a few videos and guides that detail the process.
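
To make the control net part a bit more concrete, this is the general shape of a Daz-render-to-photo pass with the diffusers library (a sketch only; the model names are examples, and in practice a depth or lineart control net may work better than canny). A face reference via IPAdapter FaceID would be layered on top of this for likeness, but that needs an extra embedding step, so it's left out:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Extract edges from the Daz render to use as structural guidance.
render = Image.open("daz_render.png").convert("RGB")
edges = cv2.Canny(np.array(render), 100, 200)
control = Image.fromarray(edges).convert("RGB")

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # swap in a photoreal checkpoint in practice
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="RAW photo of a woman, natural skin, soft lighting",
    image=control,
    controlnet_conditioning_scale=0.8,  # how strictly to follow the render's lines
    num_inference_steps=30,
).images[0]
image.save("photoreal_from_daz.png")
```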
 
Thank you. As I said earlier, I'm really struggling to get to grips with this AI stuff, and yes, what I'm looking for is a way to maybe use it to create realistic character models for Daz3D based on actual photographic reference (I have thousands of photos of myself from my modeling career that I could use to make a more accurate CGI model of myself, for example - though the model I currently have is already very close). But it would also be a lot of fun to take images of (for example) one or more pornstars and render myself into an image with them, or loads of other fun stuff :)


Also, my own attempts at making celebrity-based Daz models have (with one or two notable exceptions) been way off the mark, and I'm always looking for ways to improve on these :)
 
A fantastic work! I like it :)
thanks, I have been playing with the Krita plugin on your suggestion...
tbh I think a thorough reconstruction here would be hours of work.
Putting this side by side with the original - even in its extremely low resolution, as provided further up in the thread by @settantuno - shows that a number of features of the original drawing have actually been lost, while SD has made some things up that weren't there.
Also, of course, there's the usual wacky anatomy.

There's the typical 'pseudorealism', as in, for instance, at first glance the fall of the cloth looks 'kinda realistic', but on closer inspection, not really...
 
Yeah, that's quite a tricky scene to convert if you aim to preserve as many details as possible from the original instead of recreating them.

SD struggles to depict a human body when a significant part of it is obscured or out of frame, as I mentioned above. In this case, it failed to depict the naked man's body correctly, so the chest area got blended with the man in the purple robe, which in turn caused a problem with depth perception. A similar problem can be observed in how the red-haired girl's hair mingled with the red clothes of the man on the right side.

If I had to work on this conversion, I'd first outpaint the image to draw the hidden body parts of the naked man so that the AI could redraw him with better context, then crop the output when done. The other issues could also be fixed using similar tricks, but it would certainly take quite an effort, as you said above.

Little problems like those are the reason why I usually spend many hours, or even days, to finish each render I make. It's undeniable that working with SD involves a lot of challenges, even though it can yield great outcomes when it works.

I can only hope it will keep evolving at a rapid pace so I can focus more on expressing what I imagine with it than on learning tricks and workarounds to deal with such problems.
 
Little problems like those are the reason why I usually spend many hours, or even days, to finish each render I make. It's undeniable that working with SD involves a lot of challenges, even though it can yield great outcomes when it works.
Since I've been playing around with the Krita/SD setup a bit, I tried this as a bit of a learning challenge ;)
And it really looks like one would have to completely deconstruct and reconstruct the scene to get an outcome that's at least faithful to the basic expression.
I'd first outpaint the image to draw the hidden body parts of the naked man
How exactly does 'outpainting' work in principle?
I'm aware that if, for instance, I have a figure that's cut off at the knees, and/or the top of the head is cropped, etc., I can enlarge the canvas, fill it with an appropriate background color taken from neighboring regions, and paint a crude approximation of the missing parts in Krita; running SD over it will then sometimes generate a decent, near-seamless outcome. Is there a more methodical approach to this?
 
In my case, I deliberately try to drop everything except the composition when I start from an image. The reason is that the base images I use are just crude drawings or Blender renders made for that purpose, so they don’t contain many details worth preserving.

And even when I occasionally work from a finished Daz render, usually from a request, I usually start by generating a rough image, using the original only as a control net input, and add back the details later. The reason is that the AI will try to match the style of the input image when it does an img2img or inpaint pass, which means it will mimic the 3D rendering style instead of generating a photorealistic output.

As for outpainting, I usually just draw the missing part in a control net image and make a high denoise / control net strength pass. I don’t know if there’s a better way of doing it in Krita since I don’t rely on outpainting often, but that’s the method I find easiest.
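
For what it's worth, the "enlarge the canvas, paint something crude, run SD over it" approach you described maps pretty directly onto an inpainting pass if you ever want to script it. Here's a hedged sketch with the diffusers library (placeholder file names; keep the dimensions multiples of 8):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

src = Image.open("cropped_figure.png").convert("RGB")
w, h = src.size
pad = 256  # extra canvas below the frame so the missing body parts can be drawn

# Enlarge the canvas and fill the new strip with a neutral color.
canvas = Image.new("RGB", (w, h + pad), (128, 128, 128))
canvas.paste(src, (0, 0))

# Mask: black = keep, white = let SD repaint (only the padded strip here).
mask = Image.new("L", (w, h + pad), 0)
mask.paste(Image.new("L", (w, pad), 255), (0, h))

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

out = pipe(
    prompt="full body, legs, feet",  # describe what should fill the new area
    image=canvas,
    mask_image=mask,
    width=w,
    height=h + pad,
    num_inference_steps=30,
).images[0]

# Once the figure is complete, crop back to the original framing if needed.
out.crop((0, 0, w, h)).save("outpainted.png")
```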
 
I've been testing SD for a couple of weeks now. At first glance everything looks fine, but when you zoom in you can see where SD (1.5 or SDXL) is lacking in detail/transitions.
I'm sure the AI will get better, but at the moment I'm much more precise with Blender. I'm trying to integrate some AI into my workflow; there's an interesting approach in Blender.


This is promising :)
 
For details, it’s recommended to upscale the image using one of the popular workflows, instead of generating it in a large resolution from the beginning.

The optimal resolution is 1024p for SDXL, and 512p for SD 1.5, which is often insufficient to contain all the details you need. In my case, I usually work with 1024p base images which I run through a tiled/iterative upscaler in ComfyUI and inpaint the output for final touches.

And yes, I am also glad that I no longer need to keep my Substance subscription, since now I can easily generate seamless PBR textures. :)
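
As a rough illustration of what that two-stage idea boils down to, here's a sketch with the diffusers library (a tiled upscaler in ComfyUI or A1111's hires fix does the same thing with more bells and whistles; the model name and sizes are just examples):

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline, StableDiffusionXLPipeline

# Stage 1: generate the base image at SDXL's native 1024p.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
prompt = "a detailed photo of ..."
low = base(prompt=prompt, width=1024, height=1024).images[0]

# Stage 2: plain resize, then a light img2img pass at the higher resolution
# so SD fills in real detail instead of just interpolating pixels.
refiner = StableDiffusionXLImg2ImgPipeline(**base.components)  # reuse loaded weights
up = low.resize((1536, 1536))
final = refiner(
    prompt=prompt,
    image=up,
    strength=0.35,  # low strength keeps the composition, adds fine detail
    num_inference_steps=30,
).images[0]
final.save("upscaled.png")
```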
 
Seems we're using nearly the same params; however, I'm using 768p for 1.5, mostly epicRealism, and I have access to an unlimited OpenArt.ai sub.
 
Yeah, it’s probably the best photorealistic model available at the moment, but 768p won’t be enough for many cases if you want details.

I don’t know what kind of features openart.ai provides, but you’ll probably want to set up a second stage to increase the resolution and add details. In A1111, it’s usually done with “hires fix”, often combined with a tiled upscaler. See if you can set up a similar workflow with the site.

It’s very difficult to get a good image with sufficient detail without an iterative or multi-stage workflow, especially if you’re using SD 1.5, whose native resolution is 512p.

EDIT: I briefly checked out openart.ai and it looks like they provide an easy way to run ComfyUI workflows. From what I’ve seen, it seems you can build or upload a custom workflow too, which means you can make a good workflow with an upscaling stage if it doesn’t already provide one.

I don’t know if it also supports custom nodes, but if it does, maybe I should try it myself for those workflows I can’t run locally on my RTX 3080.
 