---
tags:
- vision-language model
- mixtral
- generation
datasets:
- YanweiLi/MGM-Instruction
---

# MGM-8x7B Model Card
<a href='https://github.com/dvlab-research/MGM'><img src='https://img.shields.io/badge/Project-Code-violet'></a>
<a href='https://mini-gemini.github.io/'><img src='https://img.shields.io/badge/Project-Page-Green'></a> 
<a href='https://arxiv.org/pdf/2403.18814.pdf'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>

## Model details
The framework supports a series of dense and MoE Large Language Models (LLMs) from 2B to 34B, enabling HD image understanding, reasoning, and generation simultaneously.
You can also try our other MGM series models:

Normal resolution setting: [MGM-2B](https://huggingface.co/YanweiLi/MGM-2B), [MGM-7B](https://huggingface.co/YanweiLi/MGM-7B), [MGM-13B](https://huggingface.co/YanweiLi/MGM-13B),  [MGM-34B](https://huggingface.co/YanweiLi/MGM-34B)

High resolution setting: [MGM-7B-HD](https://huggingface.co/YanweiLi/MGM-7B-HD), [MGM-13B-HD](https://huggingface.co/YanweiLi/MGM-13B-HD), [MGM-8x7B-HD](https://huggingface.co/YanweiLi/MGM-8x7B-HD), [MGM-34B-HD](https://huggingface.co/YanweiLi/MGM-34B-HD)

**Model type:**
MGM is an open-source chatbot trained by fine-tuning Mixtral-8x7B on GPT-generated multimodal instruction-following data.

It empowers existing frameworks to support HD image understanding, reasoning, and generation simultaneously.

**Model version:**
MGM with LLM Mixtral-8x7B-Instruct-v0.1

**Model date:**
MGM-8x7B was trained on 03/2024.
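
**Downloading the checkpoint:**
A minimal sketch for fetching the weights from this repository, assuming `huggingface_hub` is installed; inference itself runs through the MGM codebase linked above, so this only downloads the files.

```python
# Minimal sketch: download the MGM-8x7B weights locally.
# Inference is run through the MGM codebase (https://github.com/dvlab-research/MGM);
# its entry points are not reproduced here.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="YanweiLi/MGM-8x7B")
print(f"Model weights downloaded to: {local_dir}")
```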

## License
Mixtral-8x7B is licensed under the Apache-2.0 license.

**Where to send questions or comments about the model:**
https://github.com/dvlab-research/MGM/issues

## Intended use
**Primary intended uses:**
The primary use is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training data
This model is trained on the [MGM-Instruction](https://huggingface.co/datasets/YanweiLi/MGM-Instruction) dataset; please refer to the [GitHub repository](https://github.com/dvlab-research/MGM) for more details.
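
A minimal sketch for pulling the dataset files, again assuming `huggingface_hub` is installed; the downloaded instruction data is then consumed by the training scripts in the GitHub repository.

```python
# Minimal sketch: fetch the MGM-Instruction data files for use with the MGM training scripts.
from huggingface_hub import snapshot_download

data_dir = snapshot_download(repo_id="YanweiLi/MGM-Instruction", repo_type="dataset")
print(f"Dataset files downloaded to: {data_dir}")
```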

## Acknowledgement
This project is not affiliated with Google LLC.