---
language: en
license: apache-2.0
model_name: emotion-ferplus-12-int8.onnx
tags:
- validated
- vision
- body_analysis
- emotion_ferplus
---
<!--- SPDX-License-Identifier: MIT -->

# FER+ Emotion Recognition

## Description
This model is a deep convolutional neural network for recognizing emotions in face images.

## Model

| Model | Download | Download (with sample test data) | ONNX version | Opset version |
|----------------|:-----------|:-----------|:--------|:-------------|
| Emotion FERPlus | [34 MB](model/emotion-ferplus-2.onnx) | [31 MB](model/emotion-ferplus-2.tar.gz) | 1.0 | 2 |
| Emotion FERPlus | [34 MB](model/emotion-ferplus-7.onnx) | [31 MB](model/emotion-ferplus-7.tar.gz) | 1.2 | 7 |
| Emotion FERPlus | [34 MB](model/emotion-ferplus-8.onnx) | [31 MB](model/emotion-ferplus-8.tar.gz) | 1.3 | 8 |
| Emotion FERPlus int8 | [19 MB](model/emotion-ferplus-12-int8.onnx) | [18 MB](model/emotion-ferplus-12-int8.tar.gz) | 1.14 | 12 |

### Paper
"Training Deep Networks for Facial Expression Recognition with Crowd-Sourced Label Distribution" [arXiv:1608.01041](https://arxiv.org/abs/1608.01041)

### Dataset
The model is trained on the FER+ annotations for the standard Emotion FER [dataset](https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data), as described in the above paper.

### Source
The model is trained in CNTK, using the cross-entropy training mode. You can find the source code [here](https://github.com/ebarsoum/FERPlus).

### Demo
[Run Emotion_FERPlus in browser](https://microsoft.github.io/onnxjs-demo/#/emotion_ferplus) - implemented in ONNX.js with Emotion_FERPlus version 1.2

## Inference
### Input
The model expects a single-channel (grayscale) input of shape `(N x 1 x 64 x 64)`, where `N` is the batch size.
### Preprocessing
Given a path `image_path` to the image you would like to score:
```python
import numpy as np
from PIL import Image

def preprocess(image_path):
    input_shape = (1, 1, 64, 64)
    # load the image and convert it to single-channel grayscale
    img = Image.open(image_path).convert('L')
    # Image.ANTIALIAS was removed in Pillow 10; LANCZOS is the equivalent filter
    img = img.resize((64, 64), Image.LANCZOS)
    img_data = np.array(img).astype(np.float32)
    return img_data.reshape(input_shape)
```
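A quick way to sanity-check the preprocessing is to run it on a synthetic image. The redefinition of `preprocess` below, and the `dummy.png` file name, are only there to keep the snippet self-contained:

```python
import numpy as np
from PIL import Image

def preprocess(image_path):
    input_shape = (1, 1, 64, 64)
    img = Image.open(image_path).convert('L')
    img = img.resize((64, 64), Image.LANCZOS)
    return np.array(img).astype(np.float32).reshape(input_shape)

# write an arbitrary 100x80 grayscale image to disk, then preprocess it
Image.fromarray(np.random.randint(0, 256, (80, 100), dtype=np.uint8)).save('dummy.png')
img_data = preprocess('dummy.png')
print(img_data.shape, img_data.dtype)  # (1, 1, 64, 64) float32
```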

### Output
The model outputs a `(1x8)` array of scores corresponding to the 8 emotion classes, where the labels map as follows:
`emotion_table = {'neutral':0, 'happiness':1, 'surprise':2, 'sadness':3, 'anger':4, 'disgust':5, 'fear':6, 'contempt':7}`
### Postprocessing
Route the model output through a softmax function to map the aggregated activations across the network to probabilities across the 8 classes.

```python
import numpy as np

def softmax(scores):
    # subtract the max before exponentiating, for numerical stability
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def postprocess(scores):
    '''
    This function takes the scores generated by the network and returns the class IDs in decreasing
    order of probability.
    '''
    prob = softmax(scores)
    prob = np.squeeze(prob)
    classes = np.argsort(prob)[::-1]
    return classes
```
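To turn the sorted class IDs back into emotion names, the `emotion_table` from above can be inverted. The scores below are made up for illustration; in practice they come from the model's output:

```python
import numpy as np

emotion_table = {'neutral':0, 'happiness':1, 'surprise':2, 'sadness':3,
                 'anger':4, 'disgust':5, 'fear':6, 'contempt':7}
# invert the table so class IDs map back to emotion names
id_to_emotion = {v: k for k, v in emotion_table.items()}

def softmax(scores):
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

scores = np.array([[0.1, 2.5, 0.3, 0.0, 0.2, -1.0, 0.4, -0.5]])  # dummy model output
classes = np.argsort(np.squeeze(softmax(scores)))[::-1]
labels = [id_to_emotion[c] for c in classes]
print(labels[0])  # happiness (index 1 has the highest score)
```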
### Sample test data
Sets of sample input and output files are provided in
* serialized protobuf TensorProtos (`.pb`), which are stored in the folders `test_data_set_*/`.

## Quantization
Emotion FERPlus int8 is obtained by quantizing the fp32 Emotion FERPlus model. We use [Intel® Neural Compressor](https://github.com/intel/neural-compressor) with the onnxruntime backend to perform quantization. See the [instructions](https://github.com/intel/neural-compressor/blob/master/examples/onnxrt/body_analysis/onnx_model_zoo/emotion_ferplus/quantization/ptq_static/README.md) for how to use Intel® Neural Compressor for quantization.

### Prepare Model
Download the model from the [ONNX Model Zoo](https://github.com/onnx/models).

```shell
wget https://github.com/onnx/models/raw/main/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-8.onnx
```

Convert the opset version to 12 for broader quantization support.

```python
import onnx
from onnx import version_converter

model = onnx.load('emotion-ferplus-8.onnx')
model = version_converter.convert_version(model, 12)
onnx.save_model(model, 'emotion-ferplus-12.onnx')
```

### Quantize Model

```bash
cd neural-compressor/examples/onnxrt/body_analysis/onnx_model_zoo/emotion_ferplus/quantization/ptq_static
# --input_model: path to the fp32 model (*.onnx)
bash run_tuning.sh --input_model=path/to/model \
                   --dataset_location=/path/to/data \
                   --output_model=path/to/save
```

## License
MIT