DOFOFFICIAL committed
Commit 875a1ff
1 Parent(s): 4d2e9d0

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -32,11 +32,11 @@ library_name: transformers
 
 - Modification: This model, animeGender-dvgg-0.7, uses all weights from the original vgg-16 model, but changes the structure of the last sequential block, the dense layers, turning it into a binary classification model whose two output nodes (activated by a softmax layer) give the probability of each gender, namely female and male. Note that the convolutional layers have been left untrained; in the future, we plan to modify this base model, vgg16, more deeply to achieve a higher score and precision in this classification task.
 
-- Input: While the original vgg-16 model was designed for inputs of 224*224 resolution with 3 channels in the RGB colorspace, our model animeGender-dvgg-0.7 uses only 64*64 RGB inputs, as the classification task is not too demanding. When feeding a picture into the model, please ensure that the input illustration contains only the head and face of the character you want to identify, so that the model's result is as precise and reliable as possible. Moreover, we have provided some Python functions in our open-source code to help you resize, crop, and transform your pictures into 64*64 RGB ones; more information is available in the file folder.
+- Input: While the original vgg-16 model was designed for inputs of 224 * 224 resolution with 3 channels in the RGB colorspace, our model animeGender-dvgg-0.7 uses only 64 * 64 RGB inputs, as the classification task is not too demanding. When feeding a picture into the model, please ensure that the input illustration contains only the head and face of the character you want to identify, so that the model's result is as precise and reliable as possible. Moreover, we have provided some Python functions in our open-source code to help you resize, crop, and transform your pictures into 64*64 RGB ones; more information is available in the file folder.
 
 - Output: This model, animeGender-dvgg-0.7, outputs a one-dimensional tensor of length 2, giving the probabilities of the two possible results for your input, namely female and male. In our open-source usage example (see the file folder), we have conveniently transformed the raw output into a readable result, for example "male", together with a number showing the probability, or confidence. Note that our model has no background knowledge of a particular character, nor the context of an animation, so some gender-neutral characters may still be misclassified, or correctly matched but with a confidence around 0.5.
 
-- Checkpoint: We provide the final, proposed model under the name "animeGender-dvgg-0.7.pth". However, to satisfy further requirements, for example research, we also provide intermediate training checkpoints, although they have proved inferior to the proposed model. For more models, please see the "more-models" folder in the file folder.
+- Checkpoint: We provide the final, proposed model under the name "animeGender-dvgg-0.7". However, to satisfy further requirements, for example research, we also provide intermediate training checkpoints, although they have proved inferior to the proposed model. For more models, please see the "more-models" folder in the file folder.
 
 
 
@@ -102,7 +102,7 @@ library_name: transformers
 ### [Usage]
 
 - We have uploaded the Python usage examples in the file folder; please note that you should download them and run them locally, using either your CPU or CUDA.
-- Note that the ".pth" model should be loaded with the pre-defined function modelload() in the provided code, while the ".safetensors" model can be loaded with the safetensors library (for example, safetensors.torch.load_file()).
+- Note that ".pth" models should be loaded with the pre-defined function modelload() in the provided code, while ".safetensors" models can otherwise be loaded with the safetensors library (for example, safetensors.torch.load_file()).
 
 
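As a rough illustration of the preprocessing the Input bullet describes, here is a minimal sketch assuming Pillow and torchvision; it is not the repository's own helper code, and the function name load_face is hypothetical.

```python
# Minimal preprocessing sketch (hypothetical helper, not the repo's own code):
# load an illustration, force 3-channel RGB, and resize to the model's 64*64 input.
from PIL import Image
import torchvision.transforms as T

preprocess = T.Compose([
    T.Resize((64, 64)),  # match the model's 64*64 input resolution
    T.ToTensor(),        # HWC uint8 in [0, 255] -> CHW float in [0, 1]
])

def load_face(path: str):
    img = Image.open(path).convert("RGB")  # ensure RGB colorspace
    return preprocess(img).unsqueeze(0)    # add batch dim: (1, 3, 64, 64)
```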
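The Output bullet says the raw length-2 softmax tensor is turned into a readable label plus a confidence. A sketch of that step, assuming the index order [female, male]; the provided code should be checked for the actual mapping.

```python
import torch

LABELS = ["female", "male"]  # assumed index order; verify against the provided code

def readable_result(output: torch.Tensor) -> str:
    probs = output.squeeze(0)            # length-2 softmax probabilities
    conf, idx = torch.max(probs, dim=0)  # more probable class and its confidence
    return f"{LABELS[int(idx)]} ({conf.item():.2f})"

# e.g. readable_result(torch.tensor([[0.08, 0.92]])) -> "male (0.92)"
```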
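On the loading note under [Usage]: modelload() is the repository's own helper, but a .safetensors checkpoint is normally read with the safetensors library. A sketch under stated assumptions: the filename follows the model card's naming, and the head is a two-node softmax replacement of vgg16's last dense layer as the Modification bullet suggests; take the actual architecture from the provided code.

```python
import torch.nn as nn
from torchvision.models import vgg16
from safetensors.torch import load_file

# Assumed architecture: stock vgg16 with its last dense layer swapped for a
# two-node softmax head, per the Modification bullet above.
model = vgg16()
model.classifier[6] = nn.Sequential(
    nn.Linear(4096, 2),  # two nodes: one per gender
    nn.Softmax(dim=1),   # per the card, outputs are softmax probabilities
)

# load_file returns a plain {name: tensor} state dict.
state_dict = load_file("animeGender-dvgg-0.7.safetensors")  # assumed filename
model.load_state_dict(state_dict)
model.eval()
```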