

Golly gee, dev. I know you’re providing a completely free service and ads barely cover the costs as it is, but sure, go ahead and add these extra features that will raise costs even further.
It’s free! What the hell? 🙄


It will be back when it’s back.


Ew. Why would you want to add something that encourages hate?


Lol


As I understand it (please correct me if I’m wrong), aren’t gallery posts global? From what little I know, a named gallery (the default is “public”) can have any name, and that name becomes the identifier for that gallery. This means anyone may use that same name and access that gallery from their own generator. If they’re public like that, how would this removal power work? You would effectively have the power to remove gallery images that don’t actually belong to your generator.


Well, I did see an em-dash in there…
Anyway, I do understand the sentiment. It’s not very fun being left in the dark, especially when we’re not even given a chance to participate.


That’s a strange question from someone with a “legal background”, especially since this is readily available information.


What you are suggesting is something called an “editor phase”; some CoT models (chain-of-thought, commonly known as “thinking” models) do this to an extent. It’s also something that can be done via JavaScript in the current AAC right now. To do this in AAC, you can either fork the current chat and make your own changes, or you can leverage the JavaScript function on characters to have the AI respond twice: once to perform a standard completion, and once more, passing the entire chat again, this time including the last response along with instructions on what to “edit”. The AI makes a new response, and you replace the last response with the modified one. AI isn’t actually capable of self-analysis or of thinking about what it’s going to do as it’s responding, so to help mitigate this issue, you must break things into steps.
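A rough sketch of that two-pass idea in plain JavaScript (`generate()` here is a hypothetical stand-in for whatever function sends the chat off and returns the AI’s reply; it is not an actual AAC API):

```javascript
// Hypothetical two-pass "editor phase" sketch.
// generate(messages) is assumed to send the full message list to the AI
// and resolve with its reply; swap in whatever your setup actually uses.
async function editorPhase(generate, chatHistory, userMessage) {
  // Pass 1: a normal completion.
  const draft = await generate([...chatHistory, userMessage]);

  // Pass 2: resend everything, plus the draft and edit instructions.
  const editInstruction = {
    role: "system",
    content: "Revise the previous assistant reply: fix contradictions, " +
             "tighten the prose, and keep the same content and point of view."
  };
  const revised = await generate(
    [...chatHistory, userMessage, draft, editInstruction]
  );

  // Replace the draft with the revised version.
  return revised;
}
```

The key point is that the second call re-sends the entire chat; the AI has no memory of having written the draft, so the draft must be fed back in explicitly.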


AI is stateless, which means it’s seeing everything for the first time every time it responds. Writing instructions, characters, lore, reminders, your response, chat history: all of it is just sent as one big block of text with a little header for each section. The AI then has to parse through all of it, try to make sense of it, and come up with a response to send back to you. What you’re asking for is just another block of text to send along with everything else. Really no different from a reminder.
This is a simple limitation of the current AI model, and it will improve (probably by a fairly good amount) with the text upgrade, but it still won’t be perfect because the AI doesn’t actually understand.
You’re providing the AI with information that sounds like China, which means the AI is going to look at the data it has, see stuff that sounds like China, and since that stuff is related to China, it’s going to talk about China.
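To picture it, that “big block of text with headers” gets assembled roughly like this (the section names are made up for illustration; this is not the actual format Perchance sends):

```javascript
// Illustrative only: how input to a stateless chat AI is typically built.
// Every request re-sends everything; nothing persists between calls.
function buildPrompt({ instructions, character, lore, reminder, history, userMessage }) {
  const sections = [
    ["## Instructions", instructions],
    ["## Character", character],
    ["## Lore", lore],
    ["## Reminder", reminder],
    ["## Chat so far", history.join("\n")],
    ["## User", userMessage],
  ];
  // One flat string; the model reads headers and content alike as plain text.
  return sections
    .filter(([, body]) => body && body.length)
    .map(([header, body]) => `${header}\n${body}`)
    .join("\n\n");
}
```

A reminder, a lore entry, and your proposed feature all end up as just more text in this one block, which is why none of them gets special treatment from the model.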


No, current image gen does not support image input as reference. It’s a text-to-image only system.


When it’s back… I imagine they are working as quickly as they realistically can.


I could see you being upset if Perchance were some paid service with deadlines and shit. But it’s not. It’s free. What do you even have to be upset about? Besides your own entertainment, what do you even have invested in Perchance?


The token length (context window) is not directly linked to the model currently in use. There also needs to be enough VRAM, which costs more money to host. Unless the dev finds some way to reduce VRAM usage and/or finds a better hosting deal with more VRAM, the context isn’t going to be increased just by changing the model used. Also, where did you hear it’s going to be Llama 3 or 3.3? Neither of those is much of an upgrade.


When the dev finishes… I mean, what are you expecting? It’s a free service developed primarily by one person. I’m just happy they still care enough to try to keep things up-to-date.


It’s using Chroma, which is a custom version of Flux.1-Schnell. I believe it’s one of the recent training snapshots, somewhere around ~40 of 50.


Context tokens are not directly based on the model unless you’re talking about very large token counts. Context tokens come from the amount of memory the model is given to work with, which costs more to host. Some models (not any of the Llama models) are better at managing this memory, but hosting a large token context still uses a lot of resources.
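As a back-of-envelope example of why that memory adds up: each token in the context has to keep a key/value cache entry per layer. The model-shape numbers below are assumptions loosely based on Llama-3-8B (32 layers, 8 grouped KV heads, head dimension 128, 2 bytes per fp16 value), not Perchance’s actual setup:

```javascript
// Back-of-envelope KV-cache math: why longer context costs more VRAM.
// Each token stores one Key and one Value vector per layer per KV head.
function kvCacheBytesPerToken({ layers, kvHeads, headDim, bytesPerValue }) {
  return 2 * layers * kvHeads * headDim * bytesPerValue;
}

// Assumed Llama-3-8B-ish shape: 32 layers, 8 KV heads, head dim 128, fp16.
const perToken = kvCacheBytesPerToken({
  layers: 32, kvHeads: 8, headDim: 128, bytesPerValue: 2,
});
const contextTokens = 8192;
const totalGiB = (perToken * contextTokens) / 1024 ** 3;
// perToken is 131072 bytes (128 KiB), so 8192 tokens of context need
// about 1 GiB of VRAM on top of the model weights themselves.
```

Double the context and you roughly double that cache, which is why “just use a bigger context window” translates directly into a bigger hosting bill.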


If I remember correctly, your generator is inside an iframe and pretty heavily isolated. Most likely, the mouse gesture is being caught by the main outer document and not being passed to the inner iframe. It should be possible to listen on the outer window and manually forward the events to the inner frame, but I’m not yet knowledgeable enough in JavaScript to give a definitive answer.
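If the frame happens to be same-origin, the forwarding could look something like this sketch. Big caveat: cross-origin frames (which Perchance’s may well be) block `contentDocument` access entirely, and you’d need `postMessage` coordination between the two documents instead:

```javascript
// Sketch: re-dispatch mouse events from the outer document into an iframe.
// Only works when the iframe is same-origin; cross-origin frames return
// null/throw on contentDocument access and need postMessage instead.
function forwardMouseEvents(iframe, types = ["mousedown", "mousemove", "mouseup"]) {
  for (const type of types) {
    window.addEventListener(type, (e) => {
      const doc = iframe.contentDocument;
      if (!doc) return; // cross-origin or not yet loaded

      // Clone the relevant fields into a fresh event for the inner document.
      const clone = new MouseEvent(type, {
        bubbles: true,
        clientX: e.clientX,
        clientY: e.clientY,
        button: e.button,
      });
      doc.dispatchEvent(clone);
    });
  }
}
```

Note the coordinates are not remapped here; a real gesture library would likely also want `clientX`/`clientY` translated into the iframe’s own coordinate space via `iframe.getBoundingClientRect()`.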


If you create an account (completely free), the ads go away. And, if you do allow ads, it’s only a banner at the bottom of the page. Also, ads only appear on pages that use AI plugins (image gen, AI text).


Do you mean you want the AI to look at the image and generate CSS based on the common colors in the image? If that’s the case, then no: there is no image-to-text option in Perchance. There are a couple of generators that use external AI models hosted on other sites for similar tasks, but nothing at Perchance.
If you’re wanting another character, why not just add another character? Make a character, then add them to the chat. Use System for system things.