Tweak concurrent tasks limit (as more than 1 seems to break) and rearrange Module-to-Text metrics to match the leaderboard
Files changed:
- app.py (+1, -1)
- src/tasks_content.py (+1, -1)
app.py

```diff
@@ -136,5 +136,5 @@ with gr.Blocks() as demo:
     )
 
 if __name__ == "__main__":
-    demo.queue(default_concurrency_limit=
+    demo.queue(default_concurrency_limit=1)
     demo.launch()
```
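For context on the fix: in Gradio's queue, `default_concurrency_limit` caps how many events per handler run at once, so setting it to 1 serializes requests. A minimal runnable sketch of the pattern (the `slow_task` handler and its 5-second sleep are hypothetical, not from this Space's actual app.py):

```python
import time

import gradio as gr


def slow_task(text: str) -> str:
    """Hypothetical long-running handler standing in for the Space's real work."""
    time.sleep(5)
    return text.upper()


with gr.Blocks() as demo:
    inp = gr.Textbox(label="Input")
    out = gr.Textbox(label="Output")
    gr.Button("Run").click(slow_task, inputs=inp, outputs=out)

if __name__ == "__main__":
    # default_concurrency_limit=1 makes queued events for each handler
    # run one at a time instead of in parallel.
    demo.queue(default_concurrency_limit=1)
    demo.launch()
```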
src/tasks_content.py

```diff
@@ -31,8 +31,8 @@ TASKS_DESCRIPTIONS = {
 Our Module-to-Text benchmark 🤗 [JetBrains-Research/lca-module-to-text](https://huggingface.co/datasets/JetBrains-Research/lca-module-to-text) includes 206 manually curated text files describing modules from different Python projects.
 
 We use the following metrics for evaluation:
-* [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge)
 * [ChrF](https://huggingface.co/spaces/evaluate-metric/chrf)
+* [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge)
 * [BERTScore](https://huggingface.co/spaces/evaluate-metric/bertscore)
 * ChatGPT-Turing-Test
 
```
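As a side note, the first three metrics in that reordered list are available through the Hugging Face `evaluate` library. A minimal sketch of scoring one generated module description against a reference (the prediction/reference strings are made up; the ChatGPT-Turing-Test has no `evaluate` implementation and is not shown):

```python
import evaluate

# Made-up prediction/reference pair for illustration only.
predictions = ["This module loads benchmark data and computes evaluation metrics."]
references = ["The module is responsible for loading the benchmark data and computing metrics."]

chrf = evaluate.load("chrf")
rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")  # downloads a model on first use

# ChrF takes a list of reference lists (several references per prediction are allowed).
print(chrf.compute(predictions=predictions, references=[references])["score"])
print(rouge.compute(predictions=predictions, references=references)["rouge1"])
print(bertscore.compute(predictions=predictions, references=references, lang="en")["f1"])
```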