How come nobody noticed this!?

#1
by BoshiAI - opened

Hey there!

I can't believe you've quantized a version of Mythalion with Kimiko v2 which supports an 8K context and nobody's noticed it!?

For that matter I'm surprised nobody noticed there was a Mythalion Kimiko v2 out at all!

Given that Kimiko is routinely rolled in with MythoMax to improve it (it's common to see people pushing MythoMax-Kimiko over the original), and Mythalion is considered a step up from MythoMax too, I'm surprised nobody noticed a Mythalion blended with Kimiko!

How was an 8K context achieved with this? I thought MythoMax only supported 4K. Or was it capable of more all along? Is it the addition of Kimiko that makes the difference here, or did you do something else to come up with the 8K GGUF? Have you actively used this model yourself, and how did you find it against MythoMax, Mythalion and other models around at the time / since?
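For what it's worth, one common way Llama-2-era models (natively 4096 tokens) get stretched to 8K is linear RoPE scaling, where position frequencies are compressed by the ratio of trained to target context; llama.cpp-style runtimes expose this as a rope-freq-scale setting. Whether that's what this quant relies on is an assumption on my part, but the arithmetic is just:

```python
# Hypothetical sketch of linear RoPE scaling (assumption: this is how the
# 8K quant works -- only the uploader can confirm).
def rope_freq_scale(trained_ctx: int, target_ctx: int) -> float:
    """Linear scaling factor: positions are compressed by this ratio."""
    return trained_ctx / target_ctx

# A Llama-2 base trained at 4096 tokens stretched to 8192 needs a
# scale of 0.5 (half frequency, double effective context):
print(rope_freq_scale(4096, 8192))  # → 0.5
```

Quality usually degrades somewhat under scaling unless the model was also fine-tuned at the longer context, which might explain differences people see past 4K.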

I've been using the 8K GGUF of this and it's working great under Faraday for me so far! It seems more soulful/warm in SFW and a little spicier in NSFW. Would love to hear from you on this model, and surprised it hasn't received more attention!

Okay, this is now OFFICIALLY my favourite model! It's warmer and more creative in SFW, spicier and more creative in NSFW, compared to MythoMax. It seems to track character card data better, too. Plus I can use 8K context? What's not to like!? I'm off to tell others about this model now lol.

Alright, I am going to try this model out. If I remember, I will give it a casual review ;)

I look forward to reading it!
For me, it's a definite step up from MythoMax, with the combination of Pygmalion and Kimiko together undoubtedly helping. SFW felt warmer and more creative, and NSFW felt spicier and more creative too. I've read that Pygmalion increases creativity, while Kimiko improves NSFW and intelligence (including things like tracking a character card better) and makes a model behave more like an RP model.
For me, it's a great combo, especially with the 8K context in the 8K GGUF quant actionpace has created (though I'd still like to know how that 8K comes about from a technical point of view).

So, I was really excited to play with this, as I've been liking Mythalion-Kimiko-v2 a lot, but the 4K context can get a little limiting sometimes. However, when I loaded the 8K-context version, it reported only 4K. Looking in the repo, it has the same SHA hash as the 4K version.

Are you sure it's actually 8K? Testing with LM Studio, I get garbage out if I go over 4K with the 8K-context model. (And they're literally the same file, as far as I can tell.)
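An easy way for anyone to double-check the "same file" claim locally is to hash both downloads; identical SHA-256 digests mean identical bytes, regardless of the file name. A minimal sketch (the file names below are placeholders, not the repo's actual names):

```python
# Sketch: check whether two GGUF downloads are byte-identical by
# comparing their SHA-256 digests.
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file in 1 MiB chunks so large GGUFs don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder file names -- substitute the actual 4K and 8K quant files:
# if sha256_of("mythalion-kimiko-4k.gguf") == sha256_of("mythalion-kimiko-8k.gguf"):
#     print("byte-identical: the 8K upload is just a re-label")
```

If the hashes match, the 8K behaviour would have to come from runtime settings (e.g. RoPE scaling in the loader), not from the file itself.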
