VoD's new gallery structure is live! Here's to a long and successful run!
I have some ideas about what I would DO!
This AI we're talking about here is not "General AI", which is what some people refer to as an existential threat - stuff like HAL 9000 or SkyNet.
This AI can't think, and will never think. It's simply a badly named plagiarizing app.
It needs an ENORMOUS amount of real human data before it can do its thing, and this data was all scraped from the internet without asking, because if they had asked, they wouldn't have gotten enough to make it work. For images this dataset is called LAION-5B, and it is used in all the AIs, even the ones that claim to be ethical and not to use it (it's still used as a base, with other, smaller datasets added on top).
AI is making creatives jobless en masse right now; it's been happening in several industries since last year. Writers, photographers, illustrators, game artists, voice actors, translators - I wonder who's next, maybe musicians. It's not just a matter of "hey, it's just like when they invented the camera, just keep scrolling and ignore it if you don't like it".
No, but it can be used to sway public opinion on political issues through the use of fake images and videos.
Well, it's like another industrial revolution. Whether you like it or not... that's life; nothing is granted. In former industrial revolutions, a huge number of people lost their jobs. Did the world become a better place because machines took people's jobs? I don't think so. But still, this is the way of life.
While I may not entirely agree with those who claim we have already reached the level of Artificial General Intelligence - or may not be knowledgeable enough about the subject, to be honest - we might be much closer to that goal than you seem to think. Publicly available AIs nowadays already outperform most law school graduates on the bar exam, for instance, and they even show rudimentary "self-awareness" when you take a screenshot of your browser while chatting with them and ask them to describe the image.
I'm not talking about Daz Studio. Do I have the necessary skills? Not 100%, not 90%, not 80%, but I'm working on improving my skills whenever my free time allows.
About quality and costs: have you ever heard of MetaHuman Animator? There was a demo last year. Pictures speak louder than words, so take a look at the two videos.
1) The Hellblade 2 demo - FYI, this is MetaHumans + a proper face rig + mocap data from a real actor; it's not an actual human being on screen.
2) The conference: this was my "wow" moment last year.
This technique is free for everyone, as stated in the video. You need an Unreal/Epic account to get access. My UE dev account dates back to before 2018.
3) I'm mostly a Blender guy, so I tried to find out how to do it there. My focus is on photorealism, so there was no way around KeenTools (https://keentools.io): their face tools in combination with GeoTracker, plus sculpting and rigging. AI is nice - I tried to integrate it into my workflow too and have rendered thousands of images over the last months - but at the moment there are way too many glitches for quality results if we're talking about SD 1.5/SDXL.
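For the Blender side, the rendering end of such a workflow is at least scriptable. A minimal bpy sketch of a stills-oriented Cycles setup - the sample count and resolution are illustrative assumptions, not a recipe from this thread:

import bpy

# Aim for clean stills rather than speed; values are illustrative only.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.samples = 1024          # high sample count for clean skin shading
scene.cycles.use_denoising = True    # denoise the final render
scene.render.resolution_x = 2048
scene.render.resolution_y = 2048

# Color management matters a lot for photorealism; Filmic avoids blown highlights.
scene.view_settings.view_transform = 'Filmic'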
I don't want to pollute your thread with tech info, so I will show my results soon at sn, au and here at crux in my thread.
I know. It is not the time to show half-baked results. I have a picture at AU, made from a tutorial; that was two years ago.
(Moved from a different thread.)
MetaHuman is certainly an impressive technology. But it's mainly for rendering real-time characters specifically targeted for Unreal Engine (the last time I checked, the license even prohibited usage outside the platform).
I'm afraid you're overestimating the technical - not artistic - skills of the average creators who contribute kinky art content to communities like CF. There's a reason why Daz3D/Poser became the de facto king of 3D creation tools.
Most people just purchase ready-made 3D assets and don't care much about the material or renderer settings before they hit the render button. Sure, installing Unreal Engine and rendering a simple MetaHuman character might not be that difficult. However, since MetaHuman was made for a different target audience, replacing Daz3D with Unreal Engine would be quite another matter.
You won't easily find all the necessary assets outside of the Daz3D Marketplace/Renderosity. While importing them to Unreal is possible, they won't be as useful without all the morph options and presets.
More importantly, even if you manage to do all that, MetaHuman won't look as good as what you can easily render with Stable Diffusion, if the criterion is photorealism.
Like you, I'm mainly a Blender person and have strived to find a way to render photorealistic human characters with it. My conclusion was that it's definitely possible, but you need to be highly proficient at things like sculpting, shaders, and so on.
When I said "highly proficient," I meant being so at a professional level. I followed a few photorealistic 3D human projects until a few years back. Even for people who know the ins and outs of Blender like Blitter, it usually takes tremendous time and effort to create a truly photorealistic outcome, which you can achieve with Stable Diffusion without too much trouble nowadays.
By the way, there was a somewhat outdated public project to create a photorealistic human model called "The Wikihuman Project" (a.k.a. Digital Emily). Although the model only depicts a single character's head, it uses quite a complex shader setup and nearly a gigabyte worth of texture maps.
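To give a sense of what even the most basic version of such a setup involves, here is a toy bpy sketch of a skin material - the texture path is a placeholder, and this is nowhere near the Wikihuman complexity:

import bpy

# Toy skin material: one albedo map plus a little subsurface scattering.
mat = bpy.data.materials.new(name="SkinSketch")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

bsdf = nodes["Principled BSDF"]
# The input is named "Subsurface" up to Blender 3.x, "Subsurface Weight" in 4.x.
bsdf.inputs["Subsurface"].default_value = 0.05

albedo = nodes.new("ShaderNodeTexImage")
albedo.image = bpy.data.images.load("//textures/face_albedo.png")  # placeholder path
links.new(albedo.outputs["Color"], bsdf.inputs["Base Color"])

A production setup layers many more maps (specular, roughness, normal, displacement, scatter) on top of this.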
All these things are way out of the league of most kink art creators, when only a very few of them know how to create even a simple 3D model like a cross, not to mention photorealistic human characters.
I know you're very skilled at Blender and could probably create something as good as the Hellblade demo if you used MetaHuman. But if someone can produce a 3D render that matches the quality of an AA game promo video, they must have far more than "a little skill", as you put it.
It's true that using Stable Diffusion XL can be quite frustrating at times. But if one is determined to spend just a fraction of the time that learning to create a photorealistic 3D character would require, they will be able to produce better-quality (i.e., closer to photorealism) renders than even the most skilled professional Unreal Engine artists can achieve.
TLDR: You can achieve photorealism in Blender and something very close to it in Unreal Engine. But it'll require far more time and effort than it would take to learn how to use Stable Diffusion properly.
If you analyze an SD 1.5 or SDXL image versus a photo in Photoshop or Affinity Photo, what do you notice first?
And what are the differences from a render image?
It's an interesting question, and I think we must clarify what we mean by "SD1.5/XL image" and "(3D) render image" first to avoid confusion.
As for SDXL (let's forget about SD 1.5 for now, since the two are not much different in this regard), I'll assume we are talking about an unmodified output from a good photorealism model, generated with a single prompt.
As for 3D rendering, I'll assume a good-quality image created with Daz3D because that's what would be most relevant to the conversation in this context.
I didn't choose something like a complex ComfyUI workflow or high-quality Blender render because we'll be talking about their respective flaws compared to real photographs.
As I mentioned above, I know that it is possible to produce a result that's practically impossible to tell from real photos using either of the methods, provided you have infinite time and top-level skills. So, I'll talk of more practical cases first.
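To make that baseline concrete, here is a minimal sketch of a single-prompt SDXL generation using the Hugging Face diffusers library - the model ID and prompt are illustrative examples, not anyone's actual setup:

import torch
from diffusers import StableDiffusionXLPipeline

# Baseline case: one prompt in, one unmodified image out.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="photo of a woman in a sunlit room, 35mm, natural skin texture",
    num_inference_steps=30,
).images[0]
image.save("baseline.png")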
First off, both of them suffer from a lack of detail, but in different ways.
SDXL struggles to depict a subject with sufficient anatomical or surface detail, especially when the subject is rendered in a small area. But if you tell it to generate an extreme closeup, it will give you a remarkably detailed image, although it'll still lack definition in smaller areas.
In comparison, Daz3D cannot render minute details such as micro hairs or creases on human skin, regardless of the render resolution. The biggest issue with Daz3D renders, however, is that they don't represent how light affects a subject like the human body in a realistic manner.
As I mentioned in an earlier post, PBR is always an approximation of how light interacts with physical surfaces, and what Daz3D provides is a quite simplified implementation of the concept. Even with a far more advanced renderer such as Blender's Cycles engine, human characters often look realistic only under very specific lighting conditions.
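For context, what every PBR renderer approximates is the rendering equation; engines mostly differ in how roughly they model the BSDF f_r and how they estimate the integral:

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i

Skin is hard precisely because its f_r is not a simple surface reflection but includes light scattering below the surface.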
The problem with SDXL can be overcome with things like inpainting and/or upscaling, which will produce results that are indistinguishable from real photos.
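A localized fix of that kind looks roughly like this sketch with SDXL inpainting in diffusers - the image and mask file names are placeholders, and the mask simply marks the region to regenerate:

import torch
from diffusers import StableDiffusionXLInpaintPipeline
from PIL import Image

# Regenerate only a flawed region (e.g. a hand) while keeping the rest of the image.
pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

init = Image.open("baseline.png").convert("RGB")
mask = Image.open("hand_mask.png").convert("RGB")  # white = area to repaint

fixed = pipe(
    prompt="detailed human hand, natural skin",
    image=init,
    mask_image=mask,
    strength=0.8,  # how strongly the masked area may change
).images[0]
fixed.save("fixed.png")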
The problem with 3D rendering, however, cannot be fixed at the Daz3D level. To mitigate it, you'll need a bunch of highly detailed photo-scanned textures (including some unusual ones like scatter maps), high-res meshes with sculpted details, and custom shader setups. And even if you have the money/time/skills to meet all those requirements, things like hair or eyebrows usually give away that it's not a real photo.
That's probably why constructing a scene in a 3D modeller and then using AI to fill in the details is such a popular approach. An AI upscaler, for example, can make both 3D renders and AI-generated images indistinguishable from real photos.
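That hybrid approach is easy to sketch: feed the 3D render through a low-strength img2img pass, so the AI adds micro-detail without touching the composition. File names below are placeholders:

import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from PIL import Image

# "3D render in, photographic detail out" - low strength preserves the composition.
pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

render = Image.open("daz_render.png").convert("RGB")  # placeholder render

result = pipe(
    prompt="photo, natural skin pores, film grain",
    image=render,
    strength=0.3,
).images[0]
result.save("detailed.png")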
Again, all this talk is only relevant when we confine ourselves to the problem of making a fictional image that looks exactly like a photo. In a more practical context, however, Blender or even Daz3D is good enough to produce stunningly "realistic" images, if we don't require such a demanding standard.
I am aware of all that. But I should have been more precise. If you compare an AI pic and a photo (let's forget about the limb problem for a moment), you will mostly see degradation in the transition zone from the clothes to the skin. Often they're "shifted"; however, sometimes I got extremely realistic results with img2img, ControlNet, and pose references.
So AI obviously does a better job than any render engine. The question is what is missing - what explains the difference - and how you can get as close as possible to real photos. Mostly it is a mix of being not too precise, light and shadow, and some tricks.
Maybe I will post one or two pictures in Q3 this year, as a proof of concept of a new method to gain more realism.
I think an example of what you mentioned as "shifting" would help me understand what you mean by that.
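For reference, the "img2img, ControlNet and pose references" combination mentioned above typically looks something like this in diffusers - an SD 1.5 sketch using the commonly shared public model IDs, with a placeholder pose image:

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

# The pose image steers the composition; the prompt drives the look.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

pose = Image.open("pose_reference.png")  # an OpenPose skeleton image (placeholder)

image = pipe(
    prompt="photo of a woman standing in a doorway, natural light",
    image=pose,
).images[0]
image.save("posed.png")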
On a side note, by the way, the most photorealistic SDXL model I've seen so far is one called "Boring Reality":
(Image linked from an external host. You can see more examples from the link above.)
It's an experimental model, so their examples show the usual AI problems like deformed limbs or garbled text. But if I focus on the lighting and tone alone, I don't think I've seen many models that can produce more natural results.
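Trying such a community checkpoint is just a matter of pointing the pipeline at the downloaded weights; the file name below is a placeholder:

import torch
from diffusers import StableDiffusionXLPipeline

# Load a community-trained SDXL checkpoint from a local .safetensors file.
pipe = StableDiffusionXLPipeline.from_single_file(
    "boring_reality.safetensors",  # placeholder file name
    torch_dtype=torch.float16,
).to("cuda")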
There are some ways to create alpha-plane people - dull posed pics with ultra-low quality but in 4K, directly from Daz. That's the best quality I get at the moment.
The main problem with render images is still the hair. It is getting better, especially with Medusa and some custom hair nodes.
I like Blender's new hair system. It has become much easier to style hair now. I think Daz3D creators should seriously consider a hybrid approach using Blender, if just for the hair and the Cycles engine.
You're right.
"Ed" by Chris in 2014 - far away from any AI, in LightWave (I was a modo guy in those days, haha).
Yeah, it's been possible to create photorealistic 3D renders of humans for quite some time. Otherwise, we wouldn't have so many CGI-enhanced films nowadays.
But such results are quite far removed from what you said could be achieved with "a little skill", which has been my point.
But what's up with Blender and AI? Is it necessary? Below is a scene from a 2021 tutorial, which was also a milestone in gaining knowledge for me. The tutorial is available at BM and CGCookie.com. Scenes rendered on my laptop. Scene by Kent Trammell. 100% made in Blender.
(Three render images attached.)
—
And: I haven‘t shown anything I made from scratch/default cube the last two years. Maybe this fall.