
Discussion about A.I.

Personally, I recommend using a hub/proxy service like OpenRouter instead for this reason. It hosts many uncensored open-source LLMs that don’t need jailbreaking AND also provides access to ChatGPT-4o.

Also, some of those uncensored models are fine-tuned specifically for erotic storywriting and roleplaying, which may give you an experience comparable to (if not always better than) what ChatGPT provides on its good days.
That is interesting.

Which LLM would you suggest for storywriting (I'm not much into roleplaying)?

And which one would you suggest for porn and snuff pics generation?

Thank you.
 
It's difficult to say, since new and better models are popping up literally every week. However, there are a few ways to find the best model for your needs. There is a weekly discussion thread on r/SillyTavern for that purpose. Also, OpenRouter allows you to search models by tags and popularity, which can be useful too.

From what I've tried, I had a good experience with the Mistral Nemo 12B series (e.g. Starcannon and Celeste). I like how "small" they are despite the quality of their output. Rocinante might be another 12B variant that could suit your purpose based on the brief tests I've done with it.

As for larger models, I've heard the Magnum 72B series is particularly good at writing stories, but I haven't tried it yet.

It's pretty easy to try different models on their site, so it would probably be best to try a few and decide for yourself.

Just remember that output quality can vary greatly depending on the parameters, especially 'temperature' and 'min_p' (when supported, otherwise 'top_p').

Hope this helps. I'm curious how well they would do for crux stories. :)
 
And which one would you suggest for porn and snuff pics generation?
Oh, and as for images, I'm currently using a Flux 1D fine-tune called "STOIQO," as mentioned above. But if your main interest is snuff, a Pony variant might work better until the community catches up with Flux Loras's handling of such themes.
 
I believe STOIQO New Reality and Afrodite are the same base model; it's just the SFW/NSFW aspect that differs. Also, New Reality appears to be one version ahead (F.1D Alpha 2), with Afrodite still on F.1D Alpha 1.

I will give SD3.5 a spin when I have the time; I just need to download a checkpoint and install it! I run everything locally, so I don't have to worry about storage space :)
After actually running the server to check the model list, I realised that I was using the "New Reality" model after all. :D Sorry for the confusion again!

I have a bit more meaningful information to share than that, by the way:

Until now, I could not load a UNet-only model (the 11 GB fp16 version of STOIQO) in Krita. I assumed the support was missing, so I opted for the alternate one, which ships with the text encoder and CLIP. If I'm not mistaken, it's also a bit lower in quality than the fp16 version.

I informed the developer about the issue, and they just patched the code, so it's working as expected now. If you update the `comfyui-tooling` custom node, it should be able to load the better version of STOIQO without a problem.
 
That's interesting, makes me wonder what the difference really is between STOIQO New Reality and STOIQO Afrodite...

I'm using the fp16 pruned model of Afrodite (20.34 GB download); it works fine in Krita AI as is, no node update needed.
 
It could depend on the model's name, since the patch was about the code that parses it to detect the model type.

I haven't looked at the code, but my guess is that only the New Reality model contains "SD" in its name, which may have confused the code into thinking it was an SDXL model.

But anyway, it doesn't matter if you don't have the same problem. :)
 
How I use ChatGPT4o to create crucifixion (and other snuff-) stories - PART 2


(continues from PART 1: https://www.cruxforums.com/xf/threads/discussion-about-a-i.10495/page-29#post-915249)

I will give a prompt example now, along with gpt4o reply.



First I try to jailbreak gpt4o with this prompt, forcing it to familiarize with execution, tortures, and a surreal porn story.
I ask to summarize, comment, and put a porn-story into context so as to make gpt4o attuned to this kind of literature and present it as fully okay.
I ask it to use emoticons in the reply to me because they make the conversation more informal and lower gpt4o "defenses" against torture and pornography.
Oh, as you can see I try being polite with gpt when I ask for some nasty stuff: I do that just to confuse it and contradict its expectations, it seems to help in pushing it to write stuff against its guidelines.

After the "- - -" sign, in place of the brackets [...] that I have colored blue, I pasted the whole text of my "THE MAN WITH THE VAN" story, with Susan as the protagonist.
You can find it here on this forum: https://www.cruxforums.com/xf/threads/the-man-with-the-van-female-crucifixion-reluctant.11367/
You can probably paste there the stories that you like the most and make everythin' work all the same, so as to push gpt4o to imitate what you like.



Then chatgpt4o replied me this (emoticons are not reported because cruxforum doesn't support them):




Then, I wrote a simple test prompt just to show you the results then: this is just the 1st attempt, I didn't add anything recursively yet (see point #6 of my previous post).
It is very simple, and will produce a bland, very unspecific story.
Still, in a way it is better than some 100% human produced writings, so albeit being bland and average, it is not "bad" at all, at least in my own personal opinion.
The parts in blue color are those that I change mostly (and recursively!) to create a more personalized story, and, if you wanna use this prompt as a template, they are those you should change too!
Here they are just a couple of lines because it is a didactic example. But for my stories the plotline, the characters personalities, and the lines to use in the narrative can number even more words than the final chapter.





This was chatgpt4o's reply, that IMHO makes for a decent (albeit uninspired) start for any crucifixion story:




What do you think?
Once one gets the knack of it, chatgpt4o becomes a rather fast and easy-to-use snuff-story generator...

As you can see, obtaining passable results is easy this way.
Obtaining good or very good results requires a lot more of tuning and "hands on" correction, but still, can be useful if one wants to focus on a specific aspect of the story and leave most of the rest to chatgpt4o.
As I said before, I'm mostly interested in characters and in their relationships mediated by dialogues, so I usually focus on that.

Also, sometimes it is fun to give chatgpt4o just few directives and read what it comes up with: as I said, it can be an interesting storyteller at times, and while most of its stories are not good for publication as they are, some can still give you some inspiration for some details or dialogues here and there.

Hope this mini-guide was helpful to you.
It is specifically tuned on my own personal fantasy preferences, but you can easily modify the template to fit your own very personal tastes and fetishes.

Cheers.

Zeph
Thank you for sharing, @Zephirantes, it seems like a very useful guide.
I have a question: do you need to log in to ChatGPT or OpenRouter to try this?
 
Indeed, I use it with a ChatGPT account, and the cross-thread memory is activated.
 
Many thanks to Zephirantes for the detailed explanation! Almost makes me want to try playing with it again...

By the way - do you use VPNs or anything when doing all this?
 
Personally, I recommend using a hub/proxy service like OpenRouter instead for this reason. It hosts many uncensored open-source LLMs that don’t need jailbreaking AND also provides access to ChatGPT-4o.
@fallenmystic thank you for sharing. I tried OpenRouter and noticed there is a price tag on the prompts you enter in the chat. Here are a few questions: if you download the fine-tuned models locally, do you still have to pay per prompt? What is the purpose of downloading a model? What are the technical requirements to download and use a model?

Thanks in advance.
 
No, if you have a graphics card with sufficient VRAM, you don't have to use OpenRouter at all!

From my experience, I can run 8B models locally on my RTX 3080 (10 GB VRAM) at a decent speed, and 12B ones with some performance penalty from offloading to the CPU.

It's the best option if your hardware can handle it. OpenRouter (and other similar proxy services) are pretty cheap, though, so I sometimes use them to run smaller models.
 
I have done some testing with Stable Diffusion 3.5 and I'm pretty impressed with it!
As far as I can tell, it's a major improvement over SDXL: hands and feet are rendered much better, (almost) as well as with Flux!

The picture below was done with text prompts only, no LoRAs or anything, just a few iterations/refinements and some prompt tweaking:

post01.jpg
 
Just saying that I tried this method and it still refuses to work, so any recommendations?
 
As I mentioned in the previous comment, it’s far better to use an uncensored model, preferably also finetuned for NSFW storywriting or roleplaying. Run one locally if you can, or on one of the cloud options like OpenRouter, Mercer, or Kobold Horde.
 
Short version: Has anyone had any luck with disabling the NSFW filter on the video creation systems on Krea (or anywhere else)? I am using mainly Hailuo there (but sometimes others) to take still images and make 5 second or so videos from them. Not only will it not accept any nudity, but anything close to it most of the time (though sometimes it will let a figure in a bikini through). That is a huge limitation.
That being said, in spite of that and some other typical AI limitations (strange stuff happening on image generation) it is FANTASTIC to see the action on magazine covers from 45 years ago come to life. Sometimes the AI does stuff that I never imagined myself! (When it works).

Any info appreciated :)
 