---
language:
- en
license: mit
tags:
- chemistry
- SMILES
- product
datasets:
- ORD
metrics:
- accuracy
---
# ⚠️ This is an old version of [ReactionT5v2-forward](https://huggingface.co/sagawa/ReactionT5v2-forward). Its prediction accuracy is worse than that of the newer version. ⚠️

# Model Card for ReactionT5v1-forward

This is a ReactionT5 model pre-trained to predict the products of chemical reactions.

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/sagawatatsuya/ReactionT5
- **Paper:** https://arxiv.org/abs/2311.06708

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
You can use this model for forward reaction prediction (predicting products from reactants and reagents) or fine-tune it on your own dataset.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained('sagawa/ReactionT5-product-prediction')
model = T5ForConditionalGeneration.from_pretrained('sagawa/ReactionT5-product-prediction')

# Inputs are SMILES strings in the format 'REACTANT:<smiles>REAGENT:<smiles>'.
inp = tokenizer('REACTANT:COC(=O)C1=CCCN(C)C1.O.[Al+3].[H-].[Li+].[Na+].[OH-]REAGENT:C1CCOC1', return_tensors='pt')
output = model.generate(**inp, min_length=6, max_length=109, num_beams=1, num_return_sequences=1, return_dict_in_generate=True, output_scores=True)
# Decode, strip spaces inserted by the tokenizer, and drop any trailing '.'.
output = tokenizer.decode(output['sequences'][0], skip_special_tokens=True).replace(' ', '').rstrip('.')
output # 'O=S(=O)([O-])[O-].O=S(=O)([O-])[O-].O=S(=O)([O-])[O-].[Cr+3].[Cr+3]'
```
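
The Results section below reports top-k accuracies, so you may want several ranked candidate products rather than a single one. Continuing from the snippet above, here is a sketch using beam search (the beam settings are illustrative, not necessarily the ones used for the reported scores):

```python
# Generate the 5 highest-scoring candidates with beam search
# (illustrative settings, continuing from the snippet above).
outputs = model.generate(**inp, min_length=6, max_length=109, num_beams=5,
                         num_return_sequences=5, return_dict_in_generate=True,
                         output_scores=True)
candidates = [
    tokenizer.decode(seq, skip_special_tokens=True).replace(' ', '').rstrip('.')
    for seq in outputs['sequences']
]
print(candidates)  # ranked list, best beam first
```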

## Training Details

### Training Procedure 

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
We used the Open Reaction Database (ORD) dataset for model training.
The command used for training is shown below. For more information, please refer to the paper and the GitHub repository.

```bash
python train.py \
    --epochs=100 \
    --batch_size=32 \
    --data_path='../data/all_ord_reaction_uniq_with_attr_v3.csv' \
    --use_reconstructed_data \
    --pretrained_model_name_or_path='sagawa/CompoundT5'
```
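
If you would rather fine-tune the model on your own reactions without the repository's `train.py`, the sketch below uses Hugging Face's `Seq2SeqTrainer`. It is a minimal illustration, assuming a CSV file `my_reactions.csv` with `input` and `output` columns; the file name, column names, and hyperparameters are assumptions, not the settings used in the paper. Inputs must follow the same `REACTANT:...REAGENT:...` format shown above.

```python
# Minimal fine-tuning sketch with Hugging Face's Seq2SeqTrainer.
# The CSV layout ('input'/'output' columns) and all hyperparameters
# below are illustrative assumptions, not the paper's settings.
from datasets import load_dataset
from transformers import (AutoTokenizer, DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments,
                          T5ForConditionalGeneration)

tokenizer = AutoTokenizer.from_pretrained('sagawa/ReactionT5-product-prediction')
model = T5ForConditionalGeneration.from_pretrained('sagawa/ReactionT5-product-prediction')

# Each row: input = 'REACTANT:...REAGENT:...', output = product SMILES.
ds = load_dataset('csv', data_files='my_reactions.csv')

def preprocess(batch):
    enc = tokenizer(batch['input'], max_length=256, truncation=True)
    enc['labels'] = tokenizer(batch['output'], max_length=128, truncation=True)['input_ids']
    return enc

tokenized = ds.map(preprocess, batched=True, remove_columns=ds['train'].column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir='reactiont5-finetuned',
        per_device_train_batch_size=32,
        num_train_epochs=10,
        learning_rate=1e-4,
    ),
    train_dataset=tokenized['train'],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```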

### Results


| Model                | Training set              | Test set | Top-1 [% acc.] | Top-2 [% acc.] | Top-3 [% acc.] | Top-5 [% acc.] |
|----------------------|---------------------------|----------|----------------|----------------|----------------|----------------|
| Sequence-to-sequence | USPTO                     | USPTO    | 80.3           | 84.7           | 86.2           | 87.5           |
| WLDN                 | USPTO                     | USPTO    | 80.6 (85.6)    | 90.5           | 92.8           | 93.4           |
| Molecular Transformer| USPTO                     | USPTO    | 88.8           | 92.6           | –              | 94.4           |
| T5Chem               | USPTO                     | USPTO    | 90.4           | 94.2           | –              | 96.4           |
| CompoundT5           | USPTO                     | USPTO    | 88.0           | 92.4           | 93.9           | 95.0           |
| ReactionT5           | ORD                       | USPTO    | 0.0 <85.0>     | 0.0 <90.6>     | 0.0 <92.3>     | 0.0 <93.8>     |

Performance comparison of CompoundT5, ReactionT5, and other models on product prediction. Values in ‘<>’ are the scores of ReactionT5 after fine-tuning on 200 reactions from the USPTO dataset; the value in ‘()’ is the one reported in the original paper.
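
The top-k accuracies above are exact-match scores over ranked candidate SMILES. For reference, here is a small sketch of how such a metric can be computed, canonicalizing SMILES with RDKit before comparison (using RDKit and these helper names is an illustrative assumption; the paper's exact matching protocol may differ):

```python
# Sketch: top-k exact-match accuracy with RDKit canonicalization.
# RDKit-based matching is an assumption for illustration; the paper's
# exact evaluation protocol may differ.
from rdkit import Chem

def canonicalize(smiles):
    """Return the canonical SMILES, or None if parsing fails."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

def top_k_accuracy(predictions, targets, k):
    """predictions: ranked candidate lists, one list per reaction."""
    hits = 0
    for candidates, target in zip(predictions, targets):
        truth = canonicalize(target)
        if truth is not None and truth in {canonicalize(c) for c in candidates[:k]}:
            hits += 1
    return hits / len(targets)
```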

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
arXiv link: https://arxiv.org/abs/2311.06708
```
@misc{sagawa2023reactiont5,  
      title={ReactionT5: a large-scale pre-trained model towards application of limited reaction data}, 
      author={Tatsuya Sagawa and Ryosuke Kojima},  
      year={2023},  
      eprint={2311.06708},  
      archivePrefix={arXiv},  
      primaryClass={physics.chem-ph}  
}
```