---
license: creativeml-openrail-m
tags:
  - coreml
  - stable-diffusion
  - text-to-image
  - not-for-all-audiences
---

Core ML Converted SDXL Model:

  • This model was converted to Core ML for use on Apple Silicon devices. Conversion instructions can be found here.
  • Provide the model to an app such as Mochi Diffusion (GitHub / Discord) to generate images.
  • The "original" version is only compatible with the CPU & GPU compute unit option (see the loading sketch after this list).
  • Resolution is the SDXL default of 1024x1024.
  • This model was converted with a vae-encoder for use with image2image.
  • This model is quantized to 8 bits.
  • Descriptions are posted as-is from original model source.
  • Not all features and/or results may be available in Core ML format.
  • This model does not have the unet split into chunks.
  • This model does not include a safety checker (for NSFW content).
  • This model cannot be used with ControlNet.
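
For reference, here is a minimal Python sketch of loading one of the converted Core ML packages with coremltools, pinning the compute units to CPU & GPU as the "original" version requires, and, for context, roughly how an 8-bit linear weight quantization is applied. The file and directory names are placeholders rather than the exact names in this repository, and the quantization step is illustrative only, since the shipped model is already quantized.

```python
import coremltools as ct
from coremltools.optimize.coreml import (
    OpLinearQuantizerConfig,
    OptimizationConfig,
    linear_quantize_weights,
)

# Placeholder path -- substitute the actual .mlpackage shipped in this repo.
unet = ct.models.MLModel(
    "DreamShaperXL_original_1024x1024/Unet.mlpackage",
    compute_units=ct.ComputeUnit.CPU_AND_GPU,  # "original" version: CPU & GPU only
)

# Inspect the inputs the converted UNet expects (latents, timestep, embeddings, ...).
print(unet.get_spec().description.input)

# Roughly how 8-bit linear weight quantization is done with coremltools;
# this repo's model already ships quantized, so this is for illustration.
config = OptimizationConfig(global_config=OpLinearQuantizerConfig(mode="linear_symmetric"))
unet_8bit = linear_quantize_weights(unet, config=config)
unet_8bit.save("Unet_8bit.mlpackage")
```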

DreamShaper-XL1.0-Alpha2_SDXL_8-bit:

Source(s): CivitAI

This is an SDXL base model converted and quantized to 8 bits.

Finetuned over SDXL1.0.

Even though this is still an alpha version, I think it's already much better than the first alpha, which was based on XL 0.9.

Basically, I do the first generation with DreamShaperXL, then I upscale by 2x, and finally I do an img2img step with either DreamShaperXL itself or a 1.5 model that I find suitable, such as DreamShaper7 or AbsoluteReality.
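
For readers who want to reproduce that workflow outside a GUI app, here is a rough Python sketch of the same idea (generate at 1024x1024, upscale 2x, then an img2img pass) using the Hugging Face diffusers library rather than this Core ML conversion. The repo id, prompt, and strength value are placeholders, not values from the original description.

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline, StableDiffusionXLPipeline

device = "mps" if torch.backends.mps.is_available() else "cuda"
model_id = "Lykon/dreamshaper-xl-1-0"  # placeholder repo id -- use the checkpoint you have
prompt = "a dragon perched on a ruined castle tower, dramatic lighting"

# 1) First generation at the SDXL-native 1024x1024.
pipe = StableDiffusionXLPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to(device)
image = pipe(prompt, width=1024, height=1024).images[0]

# 2) Upscale 2x (a plain resize here; a dedicated upscaler gives better detail).
upscaled = image.resize((2048, 2048))

# 3) img2img refinement pass over the upscaled image, in place of the SDXL refiner.
img2img = StableDiffusionXLImg2ImgPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to(device)
final = img2img(prompt, image=upscaled, strength=0.35).images[0]
final.save("dragon_highres.png")
```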

What does it do better than SDXL1.0?

  • No need for refiner. Just do highres fix (upscale + i2i)
  • Better looking people
  • Less blurry edges
  • 75% better dragons 🐉
  • Better NSFW
