Rakesh Chavhan committed on
Commit 6a6fd6e
1 Parent(s): 8875fed

Update README.md

Files changed (1)
  1. README.md +1 -272
README.md CHANGED
@@ -1,272 +1 @@
- <p align="center">
- <img src="assets/realesrgan_logo.png" height=120>
- </p>
-
- ## <div align="center"><b><a href="README.md">English</a> | <a href="README_CN.md">简体中文</a></b></div>
-
- <div align="center">
-
- 👀[**Demos**](#-demo-videos) **|** 🚩[**Updates**](#-updates) **|** ⚡[**Usage**](#-quick-inference) **|** 🏰[**Model Zoo**](docs/model_zoo.md) **|** 🔧[Install](#-dependencies-and-installation) **|** 💻[Train](docs/Training.md) **|** ❓[FAQ](docs/FAQ.md) **|** 🎨[Contribution](docs/CONTRIBUTING.md)
-
- [![download](https://img.shields.io/github/downloads/xinntao/Real-ESRGAN/total.svg)](https://github.com/xinntao/Real-ESRGAN/releases)
- [![PyPI](https://img.shields.io/pypi/v/realesrgan)](https://pypi.org/project/realesrgan/)
- [![Open issue](https://img.shields.io/github/issues/xinntao/Real-ESRGAN)](https://github.com/xinntao/Real-ESRGAN/issues)
- [![Closed issue](https://img.shields.io/github/issues-closed/xinntao/Real-ESRGAN)](https://github.com/xinntao/Real-ESRGAN/issues)
- [![LICENSE](https://img.shields.io/github/license/xinntao/Real-ESRGAN.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE)
- [![python lint](https://github.com/xinntao/Real-ESRGAN/actions/workflows/pylint.yml/badge.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/.github/workflows/pylint.yml)
- [![Publish-pip](https://github.com/xinntao/Real-ESRGAN/actions/workflows/publish-pip.yml/badge.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/.github/workflows/publish-pip.yml)
-
- </div>
-
- 🔥 **AnimeVideo-v3 model** (a small model for anime videos). Please see [[*anime video models*](docs/anime_video_model.md)] and [[*comparisons*](docs/anime_comparisons.md)]<br>
- 🔥 **RealESRGAN_x4plus_anime_6B** for anime images (anime illustration model). Please see [[*anime_model*](docs/anime_model.md)]
-
- <!-- 1. You can try in our website: [ARC Demo](https://arc.tencent.com/en/ai-demos/imgRestore) (now only support RealESRGAN_x4plus_anime_6B) -->
- 1. :boom: **Updated** online Replicate demo: [![Replicate](https://img.shields.io/static/v1?label=Demo&message=Replicate&color=blue)](https://replicate.com/xinntao/realesrgan)
- 1. Online Colab demo for Real-ESRGAN: [![Colab](https://img.shields.io/static/v1?label=Demo&message=Colab&color=orange)](https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing) **|** Online Colab demo for Real-ESRGAN (**anime videos**): [![Colab](https://img.shields.io/static/v1?label=Demo&message=Colab&color=orange)](https://colab.research.google.com/drive/1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing)
- 1. Portable [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [macOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel/AMD/Nvidia GPUs**. You can find more information [here](#portable-executable-files-ncnn). The ncnn implementation is in [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan)
- <!-- 1. You can watch enhanced animations in [Tencent Video](https://v.qq.com/s/topic/v_child/render/fC4iyCAM.html). 欢迎观看[腾讯视频动漫修复](https://v.qq.com/s/topic/v_child/render/fC4iyCAM.html) -->
-
- Real-ESRGAN aims at developing **Practical Algorithms for General Image/Video Restoration**.<br>
- We extend the powerful ESRGAN to a practical restoration application (namely, Real-ESRGAN), which is trained with pure synthetic data.
-
- 🌌 Thanks for your valuable feedback and suggestions. All feedback is collected in [feedback.md](docs/feedback.md).
-
- ---
-
- If Real-ESRGAN is helpful, please ⭐ this repo or recommend it to your friends 😊 <br>
- Other recommended projects:<br>
- ▶️ [GFPGAN](https://github.com/TencentARC/GFPGAN): A practical algorithm for real-world face restoration <br>
- ▶️ [BasicSR](https://github.com/xinntao/BasicSR): An open-source image and video restoration toolbox<br>
- ▶️ [facexlib](https://github.com/xinntao/facexlib): A collection that provides useful face-related functions.<br>
- ▶️ [HandyView](https://github.com/xinntao/HandyView): A PyQt5-based image viewer that is handy for viewing and comparison <br>
- ▶️ [HandyFigure](https://github.com/xinntao/HandyFigure): Open-source files for paper figures <br>
-
- ---
-
- ### 📖 Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data
-
- > [[Paper](https://arxiv.org/abs/2107.10833)] &emsp; [[YouTube Video](https://www.youtube.com/watch?v=fxHWoDSSvSc)] &emsp; [[Bilibili explanation](https://www.bilibili.com/video/BV1H34y1m7sS/)] &emsp; [[Poster](https://xinntao.github.io/projects/RealESRGAN_src/RealESRGAN_poster.pdf)] &emsp; [[PPT slides](https://docs.google.com/presentation/d/1QtW6Iy8rm8rGLsJ0Ldti6kP-7Qyzy6XL/edit?usp=sharing&ouid=109799856763657548160&rtpof=true&sd=true)]<br>
- > [Xintao Wang](https://xinntao.github.io/), Liangbin Xie, [Chao Dong](https://scholar.google.com.hk/citations?user=OSDCB0UAAAAJ), [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en) <br>
- > [Tencent ARC Lab](https://arc.tencent.com/en/ai-demos/imgRestore); Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
-
- <p align="center">
- <img src="assets/teaser.jpg">
- </p>
-
- ---
-
- <!---------------------------------- Updates --------------------------->
- ## 🚩 Updates
-
- ✅ Add the **realesr-general-x4v3** model, a tiny model for general scenes. It also supports the **-dn** option to balance the amount of denoising (avoiding over-smooth results); **-dn** is short for denoising strength. See the example command after this list.
- ✅ Update the **RealESRGAN AnimeVideo-v3** model. Please see [anime video models](docs/anime_video_model.md) and [comparisons](docs/anime_comparisons.md) for more details.
- ✅ Add small models for anime videos. More details are in [anime video models](docs/anime_video_model.md).
- ✅ Add the ncnn implementation [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan).
- ✅ Add [*RealESRGAN_x4plus_anime_6B.pth*](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth), which is optimized for **anime** images with a much smaller model size. More details and comparisons with [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan) are in [**anime_model.md**](docs/anime_model.md)
- ✅ Support finetuning on your own data or paired data (*i.e.*, finetuning ESRGAN). See [here](docs/Training.md#Finetune-Real-ESRGAN-on-your-own-dataset)
- ✅ Integrate [GFPGAN](https://github.com/TencentARC/GFPGAN) to support **face enhancement**.
- ✅ Integrated into [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See [Gradio Web Demo](https://huggingface.co/spaces/akhaliq/Real-ESRGAN). Thanks [@AK391](https://github.com/AK391)
- ✅ Support arbitrary scale with `--outscale` (it further resizes outputs with `LANCZOS4`). Add the *RealESRGAN_x2plus.pth* model.
- ✅ [The inference code](inference_realesrgan.py) supports: 1) **tile** options; 2) images with **alpha channel**; 3) **gray** images; 4) **16-bit** images.
- ✅ The training code has been released. A detailed guide can be found in [Training.md](docs/Training.md).
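-
- For example, an illustrative use of the **-dn** option mentioned above (a minimal sketch; it assumes the **realesr-general-x4v3** weights are available to `inference_realesrgan.py`, and `inputs` is the script's default input folder):
-
- ```bash
- # -dn balances the denoising strength; values closer to 1 denoise more aggressively (realesr-general-x4v3 only)
- python inference_realesrgan.py -n realesr-general-x4v3 -i inputs -dn 0.5
- ```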
-
- ---
-
- <!---------------------------------- Demo videos --------------------------->
- ## 👀 Demo Videos
-
- #### Bilibili
-
- - [Havoc in Heaven (大闹天宫) clip](https://www.bilibili.com/video/BV1ja41117zb)
- - [Anime dance cut](https://www.bilibili.com/video/BV1wY4y1L7hT/)
- - [One Piece (海贼王) clip](https://www.bilibili.com/video/BV1i3411L7Gy/)
-
- #### YouTube
-
- ## 🔧 Dependencies and Installation
-
- - Python >= 3.7 (we recommend [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- - [PyTorch >= 1.7](https://pytorch.org/)
-
- ### Installation
-
- 1. Clone the repo
-
- ```bash
- git clone https://github.com/xinntao/Real-ESRGAN.git
- cd Real-ESRGAN
- ```
-
- 1. Install dependent packages
-
- ```bash
- # Install basicsr - https://github.com/xinntao/BasicSR
- # We use BasicSR for both training and inference
- pip install basicsr
- # facexlib and gfpgan are for face enhancement
- pip install facexlib
- pip install gfpgan
- pip install -r requirements.txt
- python setup.py develop
- ```
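-
- Optionally, as a quick sanity check (a minimal sketch, assuming the editable install above completed without errors), confirm that the installed packages import cleanly:
-
- ```bash
- # optional: verify that basicsr and realesrgan can be imported
- python -c "import basicsr, realesrgan; print('Real-ESRGAN environment OK')"
- ```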
-
- ---
-
- ## ⚡ Quick Inference
-
- There are three main ways to run inference with Real-ESRGAN:
-
- 1. [Online inference](#online-inference)
- 1. [Portable executable files (NCNN)](#portable-executable-files-ncnn)
- 1. [Python script](#python-script)
-
- ### Online inference
-
- 1. You can try it on our website: [ARC Demo](https://arc.tencent.com/en/ai-demos/imgRestore) (currently only supports RealESRGAN_x4plus_anime_6B)
- 1. [Colab Demo](https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing) for Real-ESRGAN **|** [Colab Demo](https://colab.research.google.com/drive/1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing) for Real-ESRGAN (**anime videos**).
-
- ### Portable executable files (NCNN)
-
- You can download [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [macOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel/AMD/Nvidia GPUs**.
-
- These executable files are **portable** and include all the binaries and models required. No CUDA or PyTorch environment is needed.<br>
-
- You can simply run the following command (a Windows example; more information is in the README.md bundled with each executable file):
-
- ```bash
- ./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n model_name
- ```
-
- We have provided the following models:
-
- 1. realesrgan-x4plus (default)
- 2. realesrnet-x4plus
- 3. realesrgan-x4plus-anime (optimized for anime images, small model size)
- 4. realesr-animevideov3 (for anime videos)
-
- You can use the `-n` argument to select another model, for example, `./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrnet-x4plus`
-
- #### Usage of portable executable files
-
- 1. Please refer to [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan#computer-usages) for more details.
- 1. Note that it does not support all the functions of the Python script `inference_realesrgan.py` (such as `outscale`).
-
- ```console
- Usage: realesrgan-ncnn-vulkan.exe -i infile -o outfile [options]...
-
-   -h                   show this help
-   -i input-path        input image path (jpg/png/webp) or directory
-   -o output-path       output image path (jpg/png/webp) or directory
-   -s scale             upscale ratio (can be 2, 3, 4. default=4)
-   -t tile-size         tile size (>=32/0=auto, default=0) can be 0,0,0 for multi-gpu
-   -m model-path        folder path to the pre-trained models. default=models
-   -n model-name        model name (default=realesr-animevideov3, can be realesr-animevideov3 | realesrgan-x4plus | realesrgan-x4plus-anime | realesrnet-x4plus)
-   -g gpu-id            gpu device to use (default=auto) can be 0,1,2 for multi-gpu
-   -j load:proc:save    thread count for load/proc/save (default=1:2:2) can be 1:2,2,2:2 for multi-gpu
-   -x                   enable tta mode
-   -f format            output image format (jpg/png/webp, default=ext/png)
-   -v                   verbose output
- ```
-
- Note that it may introduce block inconsistency (and also generate slightly different results from the PyTorch implementation), because this executable first crops the input image into several tiles, processes them separately, and finally stitches them back together.
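-
- For example (an illustrative command; the file names are placeholders), you can set a fixed tile size with `-t` if the automatic tile size runs out of GPU memory:
-
- ```bash
- # force 256-pixel tiles instead of the automatic tile size (-t 0)
- ./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrgan-x4plus -t 256
- ```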
-
- ### Python script
-
- #### Usage of the Python script
-
- 1. You can use the X4 model for **arbitrary output sizes** with the `--outscale` argument. The program further performs a cheap resize operation on the Real-ESRGAN output.
-
- ```console
- Usage: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile -o outfile [options]...
-
- A common command: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile --outscale 3.5 --face_enhance
-
-   -h                   show this help
-   -i, --input          Input image or folder. Default: inputs
-   -o, --output         Output folder. Default: results
-   -n, --model_name     Model name. Default: RealESRGAN_x4plus
-   -s, --outscale       The final upsampling scale of the image. Default: 4
-   --suffix             Suffix of the restored image. Default: out
-   -t, --tile           Tile size, 0 for no tile during testing. Default: 0
-   --face_enhance       Whether to use GFPGAN to enhance faces. Default: False
-   --fp32               Use fp32 precision during inference. Default: fp16 (half precision).
-   --ext                Image extension. Options: auto | jpg | png, auto means using the same extension as inputs. Default: auto
- ```
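-
- For instance (an illustrative command that only uses the options listed above; `input.jpg` is a placeholder), you can produce a 2x-sized output and process the image in 400-pixel tiles to keep GPU memory usage low:
-
- ```bash
- # 2x output via --outscale; tiled processing helps avoid CUDA out-of-memory errors on large images
- python inference_realesrgan.py -n RealESRGAN_x4plus -i input.jpg --outscale 2 --tile 400
- ```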
-
- #### Inference on general images
-
- Download the pre-trained model: [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth)
-
- ```bash
- wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P weights
- ```
-
- Inference!
-
- ```bash
- python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs --face_enhance
- ```
-
- Results are in the `results` folder.
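-
- If you prefer calling Real-ESRGAN from your own Python code instead of the script, the sketch below shows one possible way. It assumes the package installed above exposes `RealESRGANer` (as in this repository) and that `weights/RealESRGAN_x4plus.pth` was downloaded as shown; the input path is a placeholder. Treat it as a starting point rather than a replacement for `inference_realesrgan.py`.
-
- ```python
- import os
-
- import cv2
- from basicsr.archs.rrdbnet_arch import RRDBNet
- from realesrgan import RealESRGANer
-
- # Build the 4x RRDB backbone used by RealESRGAN_x4plus and wrap it with the helper class
- model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
- upsampler = RealESRGANer(
-     scale=4,
-     model_path='weights/RealESRGAN_x4plus.pth',  # the weights downloaded above
-     model=model,
-     tile=0,       # set e.g. 400 if you run into GPU out-of-memory errors
-     tile_pad=10,
-     pre_pad=0,
-     half=True)    # fp16 inference; use half=False on CPU or GPUs without fp16 support
-
- # 'inputs/your_image.jpg' is a placeholder - point it at any image you like
- img = cv2.imread('inputs/your_image.jpg', cv2.IMREAD_UNCHANGED)
- output, _ = upsampler.enhance(img, outscale=4)
-
- os.makedirs('results', exist_ok=True)
- cv2.imwrite('results/your_image_out.png', output)
- ```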
-
- #### Inference on anime images
-
- <p align="center">
- <img src="https://raw.githubusercontent.com/xinntao/public-figures/master/Real-ESRGAN/cmp_realesrgan_anime_1.png">
- </p>
-
- Pre-trained model: [RealESRGAN_x4plus_anime_6B](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth)<br>
- More details and comparisons with [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan) are in [**anime_model.md**](docs/anime_model.md)
-
- ```bash
- # download model
- wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P weights
- # inference
- python inference_realesrgan.py -n RealESRGAN_x4plus_anime_6B -i inputs
- ```
-
- Results are in the `results` folder.
-
- ---
-
- ## BibTeX
-
-     @InProceedings{wang2021realesrgan,
-         author    = {Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
-         title     = {Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
-         booktitle = {International Conference on Computer Vision Workshops (ICCVW)},
-         date      = {2021}
-     }
-
- ## 📧 Contact
-
- If you have any questions, please email `[email protected]` or `[email protected]`.
-
- <!---------------------------------- Projects that use Real-ESRGAN --------------------------->
- ## 🧩 Projects that use Real-ESRGAN
-
- If you develop or use Real-ESRGAN in your projects, you are welcome to let me know.
-
- - NCNN-Android: [RealSR-NCNN-Android](https://github.com/tumuyan/RealSR-NCNN-Android) by [tumuyan](https://github.com/tumuyan)
- - VapourSynth: [vs-realesrgan](https://github.com/HolyWu/vs-realesrgan) by [HolyWu](https://github.com/HolyWu)
- - NCNN: [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan)
-
- &nbsp;&nbsp;&nbsp;&nbsp;**GUI**
-
- - [Waifu2x-Extension-GUI](https://github.com/AaronFeng753/Waifu2x-Extension-GUI) by [AaronFeng753](https://github.com/AaronFeng753)
- - [Squirrel-RIFE](https://github.com/Justin62628/Squirrel-RIFE) by [Justin62628](https://github.com/Justin62628)
- - [Real-GUI](https://github.com/scifx/Real-GUI) by [scifx](https://github.com/scifx)
- - [Real-ESRGAN_GUI](https://github.com/net2cn/Real-ESRGAN_GUI) by [net2cn](https://github.com/net2cn)
- - [Real-ESRGAN-EGUI](https://github.com/WGzeyu/Real-ESRGAN-EGUI) by [WGzeyu](https://github.com/WGzeyu)
- - [anime_upscaler](https://github.com/shangar21/anime_upscaler) by [shangar21](https://github.com/shangar21)
- - [Upscayl](https://github.com/upscayl/upscayl) by [Nayam Amarshe](https://github.com/NayamAmarshe) and [TGS963](https://github.com/TGS963)
-
- ## 🤗 Acknowledgement
-
- Thanks to all the contributors.
-
- - [AK391](https://github.com/AK391): Integrate RealESRGAN into [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See [Gradio Web Demo](https://huggingface.co/spaces/akhaliq/Real-ESRGAN).
- - [Asiimoviet](https://github.com/Asiimoviet): Translate the README.md to Chinese (中文).
- - [2ji3150](https://github.com/2ji3150): Thanks for the [detailed and valuable feedback/suggestions](https://github.com/xinntao/Real-ESRGAN/issues/131).
- - [Jared-02](https://github.com/Jared-02): Translate the Training.md to Chinese (中文).

+ Image Enhancer Using Real-ESRGAN