New 7B?

#1
by Excido - opened

So should I be using this 7B model in my wrapper instead, is this the 768 target size model?

Alpha-VLLM org

> So should I be using this 7B model in my wrapper instead, is this the 768 target size model?

No, these are the weights of the original Chameleon-7B model (from Meta), used for model initialization when reproducing the training of Lumina-mGPT.
