miaohaiyuan committed on
Commit 78163ee
1 Parent(s): 9fbdb03

initial delivery

.env.sample ADDED
@@ -0,0 +1,3 @@
+ OPENAI_API_KEY="sk-xxxxx" # replace "sk-xxxxx" with your secret OpenAI API key
+ OPENAI_BASE_URL="" # replace "" with your API base URL, e.g. "https://api.openai.com"
+ OPENAI_MODEL="" # replace "" with your model name, e.g. deepseek-chat
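These three settings are read once at startup by `src/translation_agent/utils.py` (added later in this commit). A minimal sketch of how they are consumed, assuming a populated `.env` file in the working directory:

```python
import os

import openai
from dotenv import load_dotenv

load_dotenv()  # copy the .env entries into the process environment
model = os.getenv("OPENAI_MODEL") or "gpt-4-turbo"  # default model if unset
client = openai.OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url=os.getenv("OPENAI_BASE_URL"),  # any OpenAI-compatible endpoint
)
```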
.gitignore ADDED
@@ -0,0 +1,8 @@
+ cache_dir
+ .env
+ .venv
+ __pycache__
+ poetry.lock
+ floresp-v2.0-rc.3
+ *cache
+ wmt
.pre-commit-config.yaml ADDED
@@ -0,0 +1,23 @@
+ repos:
+   - repo: https://github.com/pre-commit/pre-commit-hooks
+     rev: v4.5.0
+     hooks:
+       - id: trailing-whitespace
+         exclude: tests
+       - id: end-of-file-fixer
+       - id: check-merge-conflict
+       - id: check-case-conflict
+       - id: check-json
+       - id: check-toml
+         exclude: tests/fixtures/invalid_lock/poetry\.lock
+       - id: check-yaml
+       - id: pretty-format-json
+         args: [--autofix, --no-ensure-ascii, --no-sort-keys]
+       - id: check-ast
+       - id: debug-statements
+       - id: check-docstring-first
+   - repo: https://github.com/astral-sh/ruff-pre-commit
+     rev: v0.3.5
+     hooks:
+       - id: ruff
+       - id: ruff-format
LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2024 pisces76
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
README.md CHANGED
@@ -1,13 +1,51 @@
- ---
- title: Translation Agent UI
- emoji:
- colorFrom: gray
- colorTo: pink
- sdk: gradio
- sdk_version: 4.36.1
- app_file: app.py
- pinned: false
- license: mit
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Translation Agent UI: A UI for Andrew's Translation Agent
+
+ This is a Gradio UI for Andrew Ng's [Translation Agent](https://github.com/andrewyng/translation-agent). The main features are:
+ 1. Look inside the reflection agentic workflow during translation (see the sketch below);
+ 2. Support other LLMs as the translation engine;
+ 3. Highlight the differences between the initial and the final translation;
+
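+ The reflection workflow runs in three steps. A minimal sketch using the helper functions from `src/translation_agent/utils.py` in this commit, assuming the package is installed (`poetry install`); the sample text is a placeholder:
+
+ ```python
+ from translation_agent.utils import (
+     one_chunk_improve_translation,
+     one_chunk_initial_translation,
+     one_chunk_reflect_on_translation,
+ )
+
+ source_lang, target_lang, country = "English", "Spanish", "Mexico"
+ source_text = "Some text to translate."
+
+ # Step 1: draft translation
+ translation_1 = one_chunk_initial_translation(source_lang, target_lang, source_text)
+ # Step 2: the LLM critiques its own draft
+ reflection = one_chunk_reflect_on_translation(
+     source_lang, target_lang, source_text, translation_1, country
+ )
+ # Step 3: the critique is folded back into an improved translation
+ translation_2 = one_chunk_improve_translation(
+     source_lang, target_lang, source_text, translation_1, reflection
+ )
+ ```
+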
+ <img width="1280" src="./examples/demo-zh_en.png" alt="Preview"/>
+ <img width="1280" src="./examples/demo-en_zh.png" alt="Preview"/>
+
+ ## Getting Started
+
+ To get started with `translation-agent-UI`, follow these steps:
+
+ ### Installation:
+ - The Poetry package manager is required for installation ([Poetry Installation](https://python-poetry.org/docs/#installation)). Depending on your environment, this might work:
+
+ ```bash
+ pip install poetry
+ ```
+
+ - A .env file with an OPENAI_API_KEY is required to run the workflow. Copy .env.sample to .env and modify the parameters. The following example uses the DeepSeek LLM as the translation engine:
+ ```
+ OPENAI_API_KEY="sk-xxxx"
+ OPENAI_BASE_URL="https://api.deepseek.com"
+ OPENAI_MODEL="deepseek-chat"
+ ```
+
+ - Clone the repo and install the dependencies:
+ ```bash
+ git clone https://github.com/andrewyng/translation-agent.git
+ cd translation-agent
+ poetry install
+ poetry shell # activates virtual environment
+ ```
+ ### Usage:
+ Launch the web UI with:
+ ```bash
+ python gr_app.py
+ ```
+ The web server will be running on the local URL http://127.0.0.1:7860 by default.
+
+ You can also call the translation workflow directly from Python:
+ ```python
+ import translation_agent as ta
+ source_lang, target_lang, country = "English", "Spanish", "Mexico"
+ source_text = "Some text to translate."
+ translation = ta.translate(source_lang, target_lang, source_text, country)
+ ```
+ See examples/example_script.py for an example script to try out.
+
+ ## License
+
+ Translation Agent UI is released under the **MIT License**. You are free to use, modify, and distribute the code
+ for both commercial and non-commercial purposes.
+
examples/README.md ADDED
@@ -0,0 +1,16 @@
+ # Examples
+
+ This directory contains example scripts demonstrating the usage of the `translation-agent` workflow.
+
+ ## Contents
+ - `example_script.py`: A simple script showing how to perform machine translation using the package.
+ - `sample-texts/`: A directory containing a few sample texts from The Batch letters written by Andrew and Data Points summaries found on the [DeepLearning.ai website](https://www.deeplearning.ai/the-batch/tag/data-points/).
+
+ ## Usage
+ To run the example scripts, ensure that you have installed the `translation-agent` package and have activated your virtual environment. Then run:
+
+ ```bash
+ python example_script.py
+ ```
+
+ If you have any questions or encounter any issues, please feel free to open an issue on the GitHub repository.
examples/demo-en_zh.png ADDED
examples/demo-zh_en.png ADDED
examples/example_script.py ADDED
@@ -0,0 +1,26 @@
+ import os
+
+ import translation_agent as ta
+
+
+ if __name__ == "__main__":
+     source_lang, target_lang, country = "English", "Spanish", "Mexico"
+
+     relative_path = "sample-texts/sample-short1.txt"
+     script_dir = os.path.dirname(os.path.abspath(__file__))
+
+     full_path = os.path.join(script_dir, relative_path)
+
+     with open(full_path, encoding="utf-8") as file:
+         source_text = file.read()
+
+     print(f"Source text:\n\n{source_text}\n------------\n")
+
+     translation = ta.translate(
+         source_lang=source_lang,
+         target_lang=target_lang,
+         source_text=source_text,
+         country=country,
+     )
+
+     print(f"Translation:\n\n{translation}")
examples/sample-texts/data_points_samples.json ADDED
@@ -0,0 +1 @@
+ [{"text": "Paid ChatGPT users can now upload files directly from Google Drive and Microsoft OneDrive, interact with tables and charts using natural language, and customize charts for presentations. When users upload or import a data file, ChatGPT can now write and execute Python code to analyze or visualize that data on users\u2019 behalf. These features may make it easier for those with limited coding skills to conduct in-depth analyses and let experts save time on routine data tasks."}, {"text": "Reddit\u2019s vast forums will be used to power ChatGPT and other AI products. The collaboration will give Reddit new AI-powered features for its users and moderators, while OpenAI will advertise on Reddit. (Full terms were undisclosed.) OpenAI now has deals with global newspapers, software forums, and a wide variety of other publishers, giving it special access to timely and high-quality training material."}, {"text": "ZeroGPU is accessible through Hugging Face\u2019s Spaces platform, which already hosts over 300,000 AI demos. The shared Nvidia A100s can be used concurrently by multiple users or applications; unutilized capacity will be made available to others. HuggingFace\u2019s goal is to counter tech giants and closed models\u2019 centralization by making state-of-the-art AI technologies more accessible."}, {"text": "Chameleon can natively process both text and images together, allowing it to perform a wide range of mixed-modal tasks with impressive results. Meta\u2019s researchers say the key is Chameleon\u2019s fully token-based architecture (representing images as well as texts as tokens) and training on datasets that combine text with images. Chameleon outperforms many leading and specialized models (including GPT-4V and Gemini Pro) when answering questions about images, describing pictures, writing relevant text, and creating images from text prompts.\u00a0"}, {"text": "Google\u2019s AI-assisted, browser-based integrated development environment (IDE) offers now-familiar features like code completion, debugging tools, and a chat-assisted sidebar, all powered by Gemini. Whenever IDX modifies snippets or suggests new code, it also links back to the original source and its associated license, ensuring proper attribution. Although Google is entering a competitive market, IDX aims to attract developers by showcasing Gemini\u2019s AI advancements and integrating with the company\u2019s cloud services."}, {"text": "The tool aims to solve new users\u2019 \u201cblank page problem\u201d by providing a starting point for testing and iteration, incorporating best practices like chain of thought and separating data from instructions. Users can access the prompt generator directly on the Console or analyze the underlying prompt and architecture using a Google Colab notebook. The generator addresses a common challenge for AI users: efficiently crafting effective (and often larger and more complex) prompts that yield high-quality results."}, {"text": "ElevenLabs Reader: AI Audio is the billion-dollar AI voice cloning startup\u2019s first consumer app. The free app can read web pages, PDFs, and other documents aloud using a selection of 11 AI-generated voices. The app marks ElevenLabs\u2019 expansion into the broader AI voice market beyond its current focus on entertainment and media production."}, {"text": "Microsoft reportedly asked hundreds of its China-based employees working on cloud computing and AI to consider relocating to other countries. One source said Microsoft offered 700 to 800 Chinese engineers the opportunity to transfer to the U.S., Ireland, Australia, or New Zealand. The move comes as the U.S. government tightens restrictions on China\u2019s access to advanced technology, citing concerns over potential military applications and cybersecurity threats."}, {"text": "Abu Dhabi\u2019s Technology Innovation Institute released Falcon 2, a family of large language models that includes Falcon 2 11B and Falcon 2 11B VLM. The latter is the institute\u2019s first multimodal model, capable of converting visual inputs into textual outputs. Both models are Apache 2.0 open-source, multilingual, and perform on par with Gemma 7B and better than Llama 3 8B according to benchmarks and HuggingFace leaderboards."}]
examples/sample-texts/sample-long1.txt ADDED
@@ -0,0 +1,16 @@
+ Last week, I spoke about AI and regulation at the U.S. Capitol at an event that was attended by legislative and business leaders. I’m encouraged by the progress the open source community has made fending off regulations that would have stifled innovation. But opponents of open source are continuing to shift their arguments, with the latest worries centering on open source's impact on national security. I hope we’ll all keep protecting open source!
+
+ Based on my conversations with legislators, I’m encouraged by the progress the U.S. federal government has made getting a realistic grasp of AI’s risks. To be clear, guardrails are needed. But they should be applied to AI applications, not to general-purpose AI technology.
+
+ Nonetheless, as I wrote previously, some companies are eager to limit open source, possibly to protect the value of massive investments they’ve made in proprietary models and to deter competitors. It has been fascinating to watch their arguments change over time.
+
+ For instance, about 12 months ago, the Center For AI Safety’s “Statement on AI Risk” warned that AI could cause human extinction and stoked fears of AI taking over. This alarmed leaders in Washington. But many people in AI pointed out that this dystopian science-fiction scenario has little basis in reality. About six months later, when I testified at the U.S. Senate’s AI Insight forum, legislators no longer worried much about an AI takeover.
+
+ Then the opponents of open source shifted gears. Their leading argument shifted to the risk of AI helping to create bioweapons. Soon afterward, OpenAI and RAND showed that current AI does not significantly increase the ability of malefactors to build bioweapons. This fear of AI-enabled bioweapons has diminished. To be sure, the possibility that bad actors could use bioweapons — with or without AI — remains a topic of great international concern.
+
+
+ The latest argument for blocking open source AI has shifted to national security. AI is useful for both economic competition and warfare, and open source opponents say the U.S. should make sure its adversaries don’t have access to the latest foundation models. While I don’t want authoritarian governments to use AI, particularly to wage unjust wars, the LLM cat is out of the bag, and authoritarian countries will fill the vacuum if democratic nations limit access. When, some day, a child somewhere asks an AI system questions about democracy, the role of a free press, or the function of an independent judiciary in preserving the rule of law, I would like the AI to reflect democratic values rather than favor authoritarian leaders’ goals over, say, human rights.
+
+ I came away from Washington optimistic about the progress we’ve made. A year ago, legislators seemed to me to spend 80% of their time talking about guardrails for AI and 20% about investing in innovation. I was delighted that the ratio has flipped, and there was far more talk of investing in innovation.
+
+ Looking beyond the U.S. federal government, there are many jurisdictions globally. Unfortunately, arguments in favor of regulations that would stifle AI development continue to proliferate. But I’ve learned from my trips to Washington and other nations’ capitals that talking to regulators does have an impact. If you get a chance to talk to a regulator at any level, I hope you’ll do what you can to help governments better understand AI.
examples/sample-texts/sample-short1.txt ADDED
@@ -0,0 +1,3 @@
+ Last week, I spoke about AI and regulation at the U.S. Capitol at an event that was attended by legislative and business leaders. I’m encouraged by the progress the open source community has made fending off regulations that would have stifled innovation. But opponents of open source are continuing to shift their arguments, with the latest worries centering on open source's impact on national security. I hope we’ll all keep protecting open source!
+
+ Based on my conversations with legislators, I’m encouraged by the progress the U.S. federal government has made getting a realistic grasp of AI’s risks. To be clear, guardrails are needed. But they should be applied to AI applications, not to general-purpose AI technology.
gr_app.py ADDED
@@ -0,0 +1,143 @@
+ import gradio as gr
+ import re
+ from difflib import Differ
+ from src.translation_agent.utils import *
+
+ LANGUAGES = {
+     'English': 'English',
+     'Español': 'Spanish',
+     'Français': 'French',
+     'Deutsch': 'German',
+     'Italiano': 'Italian',
+     'Português': 'Portuguese',
+     'Русский': 'Russian',
+     '中文': 'Chinese',
+     '日本語': 'Japanese',
+     '한국어': 'Korean',
+     'العربية': 'Arabic',
+     'हिन्दी': 'Hindi',
+ }
+
+
+ def diff_texts(text1, text2, lang):
+     d = Differ()
+     ic(lang)
+     if lang == '中文':
+         # Chinese has no word spacing, so diff character by character
+         return [
+             (token[2:], token[0] if token[0] != " " else None)
+             for token in d.compare(text1, text2)
+             if token[0] in ["+", " "]
+         ]
+     else:
+         # Split into words and whitespace runs, then diff word by word
+         words1 = re.findall(r'\S+|\s+', text1)
+         words2 = re.findall(r'\S+|\s+', text2)
+
+         return [
+             (token[2:], token[0] if token[0] != " " else None)
+             for token in d.compare(words1, words2)
+             if token[0] in ["+", " "]
+         ]
+
+
+ def translate_text(source_lang, target_lang, source_text, country, max_tokens=MAX_TOKENS_PER_CHUNK):
+     num_tokens_in_text = num_tokens_in_string(source_text)
+
+     ic(num_tokens_in_text)
+
+     if num_tokens_in_text < max_tokens:
+         ic("Translating text as single chunk")
+
+         # Note: use "yield from B()" if the yield is placed inside a function B()
+         translation_1 = one_chunk_initial_translation(
+             source_lang, target_lang, source_text
+         )
+         yield translation_1, None, None
+
+         reflection = one_chunk_reflect_on_translation(
+             source_lang, target_lang, source_text, translation_1, country
+         )
+         yield translation_1, reflection, None
+
+         translation_2 = one_chunk_improve_translation(
+             source_lang, target_lang, source_text, translation_1, reflection
+         )
+         translation_diff = diff_texts(translation_1, translation_2, target_lang)
+         yield translation_1, reflection, translation_diff
+
+     else:
+         ic("Translating text as multiple chunks")
+
+         token_size = calculate_chunk_size(
+             token_count=num_tokens_in_text, token_limit=max_tokens
+         )
+
+         ic(token_size)
+
+         text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
+             model_name="gpt-4",
+             chunk_size=token_size,
+             chunk_overlap=0,
+         )
+
+         source_text_chunks = text_splitter.split_text(source_text)
+
+         translation_1_chunks = multichunk_initial_translation(
+             source_lang, target_lang, source_text_chunks
+         )
+         ic(translation_1_chunks)
+         translation_1 = "".join(translation_1_chunks)
+         yield translation_1, None, None
+
+         reflection_chunks = multichunk_reflect_on_translation(
+             source_lang,
+             target_lang,
+             source_text_chunks,
+             translation_1_chunks,
+             country,
+         )
+         ic(reflection_chunks)
+         reflection = "".join(reflection_chunks)
+         yield translation_1, reflection, None
+
+         translation_2_chunks = multichunk_improve_translation(
+             source_lang,
+             target_lang,
+             source_text_chunks,
+             translation_1_chunks,
+             reflection_chunks,
+         )
+         ic(translation_2_chunks)
+         translation_2 = "".join(translation_2_chunks)
+         translation_diff = diff_texts(translation_1, translation_2, target_lang)
+
+         yield translation_1, reflection, translation_diff
+
+
+ def update_ui(translation_1, reflection, translation_diff):
+     return gr.update(value=translation_1), gr.update(value=reflection), gr.update(value=translation_diff)
+
+
+ with gr.Blocks() as demo:
+     gr.Markdown("# Andrew Ng's Translation Agent")
+     with gr.Row():
+         source_lang = gr.Dropdown(choices=list(LANGUAGES.keys()), value='English', label="Source Language")
+         target_lang = gr.Dropdown(choices=list(LANGUAGES.keys()), value='中文', label="Target Language")
+         country = gr.Textbox(label="Country (for target language)")
+     source_text = gr.Textbox(label="Source Text", lines=5, show_copy_button=True)
+
+     btn = gr.Button("Translate")
+
+     with gr.Row():
+         translation_1 = gr.Textbox(label="Initial Translation", lines=3)
+         reflection = gr.Textbox(label="Reflection", lines=3)
+
+     translation_diff = gr.HighlightedText(label="Final Translation",
+                                           combine_adjacent=True,
+                                           show_legend=True,
+                                           color_map={"+": "red"})
+     # translation = gr.Textbox(label="Final Translation", lines=5, show_copy_button=True)
+
+     btn.click(translate_text, inputs=[source_lang, target_lang, source_text, country], outputs=[translation_1, reflection, translation_diff], queue=True)
+     btn.click(update_ui, inputs=[translation_1, reflection, translation_diff], outputs=[translation_1, reflection, translation_diff], queue=True)
+
+ demo.launch()
pyproject.toml ADDED
@@ -0,0 +1,121 @@
+ [tool.poetry]
+ name = "translation-agent"
+ version = "0.1.0"
+ description = "Agentic workflow for machine translation using LLMs"
+ authors = ["Andrew Ng <[email protected]>"]
+ license = "MIT"
+ readme = "README.md"
+ package-mode = true
+ packages = [{ include = "translation_agent", from = "src" }]
+ repository = "https://github.com/andrewyng/translation-agent"
+ keywords = ["translation", "agents", "LLM", "machine translation"]
+
+
+ [tool.poetry.dependencies]
+ python = "^3.9"
+ openai = "^1.28.1"
+ tiktoken = "^0.6.0"
+ joblib = "^1.4.2"
+ pysrt = "^1.1.2"
+ icecream = "^2.1.3"
+ langchain-text-splitters = "^0.0.1"
+ python-dotenv = "^1.0.1"
+ gradio = "^4.36.1"
+
+ [tool.poetry.group.dev]
+ optional = true
+
+ [tool.poetry.group.dev.dependencies]
+ black = "^24.4.2"
+ flake8 = "^7.0.0"
+ pyright = "^1.1.362"
+ pre-commit = "^3.7.1"
+ ruff = "^0.4.4"
+
+ [tool.poetry.group.test]
+ optional = true
+
+ [tool.poetry.group.test.dependencies]
+ pytest = "^8.2.0"
+ mypy = "^1.10.0"
+ pytest-mock = "^3.14.0"
+
+ [tool.poetry.group.eval]
+ optional = true
+
+ [tool.poetry.group.eval.dependencies]
+ nltk = "^3.8.1"
+ sacrebleu = "^2.4.2"
+ google-cloud-translate = "^3.15.3"
+ deepl = "^1.18.0"
+ numpy = "^1.26.4"
+ scipy = "^1.13.0"
+ gradio = "^4.31.5"
+ requests = "^2.32.3"
+ beautifulsoup4 = "^4.12.3"
+ sentencepiece = "^0.2.0"
+
+
+ [[tool.poetry.source]]
+ name = "pytorch"
+ url = "https://download.pytorch.org/whl/nightly/rocm6.0"
+ priority = "supplemental"
+
+ [tool.ruff]
+ # Set the maximum line length to 79.
+ line-length = 79
+ indent-width = 4
+ exclude = [".venv", ".env", ".git", "tests", "eval"]
+
+ [tool.ruff.lint]
+ # Add the `line-too-long` rule to the enforced rule set. By default, Ruff omits rules that
+ # overlap with the use of a formatter, like Black, but we can override this behavior by
+ # explicitly adding the rule.
+ extend-select = [
+     "B",    # flake8-bugbear
+     "C4",   # flake8-comprehensions
+     "ERA",  # flake8-eradicate/eradicate
+     "I",    # isort
+     "N",    # pep8-naming
+     "PIE",  # flake8-pie
+     "PGH",  # pygrep
+     "RUF",  # ruff checks
+     "SIM",  # flake8-simplify
+     # "T20",  # flake8-print
+     "TCH",  # flake8-type-checking
+     "TID",  # flake8-tidy-imports
+     "UP",   # pyupgrade
+ ]
+ fixable = ["ALL"]
+ ignore = ["SIM117"]
+
+ [tool.ruff.lint.isort]
+ force-single-line = true
+ lines-after-imports = 2
+ known-first-party = ["translation-agent"]
+
+ [tool.ruff.lint.per-file-ignores]
+ "**/__init__.py" = ["E402", "F401"]
+ "**/{tests,docs,tools}/*" = ["E402"]
+
+
+ [tool.mypy]
+ files = "src, tests"
+ mypy_path = "src"
+ namespace_packages = true
+ explicit_package_bases = true
+ show_error_codes = true
+ strict = true
+ enable_error_code = ["ignore-without-code", "redundant-expr", "truthy-bool"]
+ exclude = ["tests"]
+
+ [tool.ruff.format]
+ quote-style = "double"
+ indent-style = "space"
+ skip-magic-trailing-comma = false
+ line-ending = "auto"
+
+
+ [build-system]
+ requires = ["poetry-core"]
+ build-backend = "poetry.core.masonry.api"
requirements.txt ADDED
@@ -0,0 +1,92 @@
+ aiofiles==23.2.1 ; python_version >= "3.9" and python_version < "4.0"
+ altair==5.3.0 ; python_version >= "3.9" and python_version < "4.0"
+ annotated-types==0.7.0 ; python_version >= "3.9" and python_version < "4.0"
+ anyio==4.4.0 ; python_version >= "3.9" and python_version < "4.0"
+ asttokens==2.4.1 ; python_version >= "3.9" and python_version < "4.0"
+ attrs==23.2.0 ; python_version >= "3.9" and python_version < "4.0"
+ certifi==2024.6.2 ; python_version >= "3.9" and python_version < "4.0"
+ chardet==5.2.0 ; python_version >= "3.9" and python_version < "4.0"
+ charset-normalizer==3.3.2 ; python_version >= "3.9" and python_version < "4.0"
+ click==8.1.7 ; python_version >= "3.9" and python_version < "4.0"
+ colorama==0.4.6 ; python_version >= "3.9" and python_version < "4.0"
+ contourpy==1.2.1 ; python_version >= "3.9" and python_version < "4.0"
+ cycler==0.12.1 ; python_version >= "3.9" and python_version < "4.0"
+ distro==1.9.0 ; python_version >= "3.9" and python_version < "4.0"
+ dnspython==2.6.1 ; python_version >= "3.9" and python_version < "4.0"
+ email-validator==2.1.1 ; python_version >= "3.9" and python_version < "4.0"
+ exceptiongroup==1.2.1 ; python_version >= "3.9" and python_version < "3.11"
+ executing==2.0.1 ; python_version >= "3.9" and python_version < "4.0"
+ fastapi-cli==0.0.4 ; python_version >= "3.9" and python_version < "4.0"
+ fastapi==0.111.0 ; python_version >= "3.9" and python_version < "4.0"
+ ffmpy==0.3.2 ; python_version >= "3.9" and python_version < "4.0"
+ filelock==3.15.1 ; python_version >= "3.9" and python_version < "4.0"
+ fonttools==4.53.0 ; python_version >= "3.9" and python_version < "4.0"
+ fsspec==2024.6.0 ; python_version >= "3.9" and python_version < "4.0"
+ gradio-client==1.0.1 ; python_version >= "3.9" and python_version < "4.0"
+ gradio==4.36.1 ; python_version >= "3.9" and python_version < "4.0"
+ h11==0.14.0 ; python_version >= "3.9" and python_version < "4.0"
+ httpcore==1.0.5 ; python_version >= "3.9" and python_version < "4.0"
+ httptools==0.6.1 ; python_version >= "3.9" and python_version < "4.0"
+ httpx==0.27.0 ; python_version >= "3.9" and python_version < "4.0"
+ huggingface-hub==0.23.4 ; python_version >= "3.9" and python_version < "4.0"
+ icecream==2.1.3 ; python_version >= "3.9" and python_version < "4.0"
+ idna==3.7 ; python_version >= "3.9" and python_version < "4.0"
+ importlib-resources==6.4.0 ; python_version >= "3.9" and python_version < "4.0"
+ jinja2==3.1.4 ; python_version >= "3.9" and python_version < "4.0"
+ joblib==1.4.2 ; python_version >= "3.9" and python_version < "4.0"
+ jsonpatch==1.33 ; python_version >= "3.9" and python_version < "4.0"
+ jsonpointer==3.0.0 ; python_version >= "3.9" and python_version < "4.0"
+ jsonschema-specifications==2023.12.1 ; python_version >= "3.9" and python_version < "4.0"
+ jsonschema==4.22.0 ; python_version >= "3.9" and python_version < "4.0"
+ kiwisolver==1.4.5 ; python_version >= "3.9" and python_version < "4.0"
+ langchain-core==0.1.52 ; python_version >= "3.9" and python_version < "4.0"
+ langchain-text-splitters==0.0.1 ; python_version >= "3.9" and python_version < "4.0"
+ langsmith==0.1.77 ; python_version >= "3.9" and python_version < "4.0"
+ markdown-it-py==3.0.0 ; python_version >= "3.9" and python_version < "4.0"
+ markupsafe==2.1.5 ; python_version >= "3.9" and python_version < "4.0"
+ matplotlib==3.9.0 ; python_version >= "3.9" and python_version < "4.0"
+ mdurl==0.1.2 ; python_version >= "3.9" and python_version < "4.0"
+ numpy==1.26.4 ; python_version >= "3.9" and python_version < "4.0"
+ openai==1.34.0 ; python_version >= "3.9" and python_version < "4.0"
+ orjson==3.10.5 ; python_version >= "3.9" and python_version < "4.0"
+ packaging==23.2 ; python_version >= "3.9" and python_version < "4.0"
+ pandas==2.2.2 ; python_version >= "3.9" and python_version < "4.0"
+ pillow==10.3.0 ; python_version >= "3.9" and python_version < "4.0"
+ pydantic-core==2.18.4 ; python_version >= "3.9" and python_version < "4.0"
+ pydantic==2.7.4 ; python_version >= "3.9" and python_version < "4.0"
+ pydub==0.25.1 ; python_version >= "3.9" and python_version < "4.0"
+ pygments==2.18.0 ; python_version >= "3.9" and python_version < "4.0"
+ pyparsing==3.1.2 ; python_version >= "3.9" and python_version < "4.0"
+ pysrt==1.1.2 ; python_version >= "3.9" and python_version < "4.0"
+ python-dateutil==2.9.0.post0 ; python_version >= "3.9" and python_version < "4.0"
+ python-dotenv==1.0.1 ; python_version >= "3.9" and python_version < "4.0"
+ python-multipart==0.0.9 ; python_version >= "3.9" and python_version < "4.0"
+ pytz==2024.1 ; python_version >= "3.9" and python_version < "4.0"
+ pyyaml==6.0.1 ; python_version >= "3.9" and python_version < "4.0"
+ referencing==0.35.1 ; python_version >= "3.9" and python_version < "4.0"
+ regex==2024.5.15 ; python_version >= "3.9" and python_version < "4.0"
+ requests==2.32.3 ; python_version >= "3.9" and python_version < "4.0"
+ rich==13.7.1 ; python_version >= "3.9" and python_version < "4.0"
+ rpds-py==0.18.1 ; python_version >= "3.9" and python_version < "4.0"
+ ruff==0.4.9 ; python_version >= "3.9" and python_version < "4.0" and sys_platform != "emscripten"
+ semantic-version==2.10.0 ; python_version >= "3.9" and python_version < "4.0"
+ shellingham==1.5.4 ; python_version >= "3.9" and python_version < "4.0"
+ six==1.16.0 ; python_version >= "3.9" and python_version < "4.0"
+ sniffio==1.3.1 ; python_version >= "3.9" and python_version < "4.0"
+ starlette==0.37.2 ; python_version >= "3.9" and python_version < "4.0"
+ tenacity==8.3.0 ; python_version >= "3.9" and python_version < "4.0"
+ tiktoken==0.6.0 ; python_version >= "3.9" and python_version < "4.0"
+ tomlkit==0.12.0 ; python_version >= "3.9" and python_version < "4.0"
+ toolz==0.12.1 ; python_version >= "3.9" and python_version < "4.0"
+ tqdm==4.66.4 ; python_version >= "3.9" and python_version < "4.0"
+ typer==0.12.3 ; python_version >= "3.9" and python_version < "4.0"
+ typing-extensions==4.12.2 ; python_version >= "3.9" and python_version < "4.0"
+ tzdata==2024.1 ; python_version >= "3.9" and python_version < "4.0"
+ ujson==5.10.0 ; python_version >= "3.9" and python_version < "4.0"
+ urllib3==2.2.1 ; python_version >= "3.9" and python_version < "4.0"
+ uvicorn==0.30.1 ; python_version >= "3.9" and python_version < "4.0" and sys_platform != "emscripten"
+ uvicorn[standard]==0.30.1 ; python_version >= "3.9" and python_version < "4.0"
+ uvloop==0.19.0 ; (sys_platform != "win32" and sys_platform != "cygwin") and platform_python_implementation != "PyPy" and python_version >= "3.9" and python_version < "4.0"
+ watchfiles==0.22.0 ; python_version >= "3.9" and python_version < "4.0"
+ websockets==11.0.3 ; python_version >= "3.9" and python_version < "4.0"
+ zipp==3.19.2 ; python_version >= "3.9" and python_version < "3.10"
src/translation_agent/__init__.py ADDED
@@ -0,0 +1 @@
+ from .utils import translate
src/translation_agent/utils.py ADDED
@@ -0,0 +1,689 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import os
2
+ from typing import List
3
+ from typing import Union
4
+
5
+ import openai
6
+ import tiktoken
7
+ from dotenv import load_dotenv
8
+ from icecream import ic
9
+ from langchain_text_splitters import RecursiveCharacterTextSplitter
10
+
11
+
12
+ load_dotenv() # read local .env file
13
+ model = os.getenv("OPENAI_MODEL") or "gpt-4-turbo"
14
+ client = openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"), base_url=os.getenv("OPENAI_BASE_URL"))
15
+
16
+
17
+ MAX_TOKENS_PER_CHUNK = (
18
+ 1000 # if text is more than this many tokens, we'll break it up into
19
+ )
20
+ # discrete chunks to translate one chunk at a time
21
+
22
+
23
+ def get_completion(
24
+ prompt: str,
25
+ system_message: str = "You are a helpful assistant.",
26
+ model: str = model,
27
+ temperature: float = 0.3,
28
+ json_mode: bool = False,
29
+ ) -> Union[str, dict]:
30
+ """
31
+ Generate a completion using the OpenAI API.
32
+
33
+ Args:
34
+ prompt (str): The user's prompt or query.
35
+ system_message (str, optional): The system message to set the context for the assistant.
36
+ Defaults to "You are a helpful assistant.".
37
+ model (str, optional): The name of the OpenAI model to use for generating the completion.
38
+ Defaults to "gpt-4-turbo".
39
+ temperature (float, optional): The sampling temperature for controlling the randomness of the generated text.
40
+ Defaults to 0.3.
41
+ json_mode (bool, optional): Whether to return the response in JSON format.
42
+ Defaults to False.
43
+
44
+ Returns:
45
+ Union[str, dict]: The generated completion.
46
+ If json_mode is True, returns the complete API response as a dictionary.
47
+ If json_mode is False, returns the generated text as a string.
48
+ """
49
+
50
+ if json_mode:
51
+ response = client.chat.completions.create(
52
+ model=model,
53
+ temperature=temperature,
54
+ top_p=1,
55
+ response_format={"type": "json_object"},
56
+ messages=[
57
+ {"role": "system", "content": system_message},
58
+ {"role": "user", "content": prompt},
59
+ ],
60
+ )
61
+ return response.choices[0].message.content
62
+ else:
63
+ response = client.chat.completions.create(
64
+ model=model,
65
+ temperature=temperature,
66
+ top_p=1,
67
+ messages=[
68
+ {"role": "system", "content": system_message},
69
+ {"role": "user", "content": prompt},
70
+ ],
71
+ )
72
+ return response.choices[0].message.content
73
+
74
+
75
+ def one_chunk_initial_translation(
76
+ source_lang: str, target_lang: str, source_text: str
77
+ ) -> str:
78
+ """
79
+ Translate the entire text as one chunk using an LLM.
80
+
81
+ Args:
82
+ source_lang (str): The source language of the text.
83
+ target_lang (str): The target language for translation.
84
+ source_text (str): The text to be translated.
85
+
86
+ Returns:
87
+ str: The translated text.
88
+ """
89
+
90
+ system_message = f"You are an expert linguist, specializing in translation from {source_lang} to {target_lang}."
91
+
92
+ translation_prompt = f"""This is an {source_lang} to {target_lang} translation, please provide the {target_lang} translation for this text. \
93
+ Do not provide any explanations or text apart from the translation.
94
+ {source_lang}: {source_text}
95
+
96
+ {target_lang}:"""
97
+
98
+ prompt = translation_prompt.format(source_text=source_text)
99
+
100
+ translation = get_completion(prompt, system_message=system_message)
101
+
102
+ return translation
103
+
104
+
105
+ def one_chunk_reflect_on_translation(
106
+ source_lang: str,
107
+ target_lang: str,
108
+ source_text: str,
109
+ translation_1: str,
110
+ country: str = "",
111
+ ) -> str:
112
+ """
113
+ Use an LLM to reflect on the translation, treating the entire text as one chunk.
114
+
115
+ Args:
116
+ source_lang (str): The source language of the text.
117
+ target_lang (str): The target language of the translation.
118
+ source_text (str): The original text in the source language.
119
+ translation_1 (str): The initial translation of the source text.
120
+ country (str): Country specified for target language.
121
+
122
+ Returns:
123
+ str: The LLM's reflection on the translation, providing constructive criticism and suggestions for improvement.
124
+ """
125
+
126
+ system_message = f"You are an expert linguist specializing in translation from {source_lang} to {target_lang}. \
127
+ You will be provided with a source text and its translation and your goal is to improve the translation."
128
+
129
+ if country != "":
130
+ reflection_prompt = f"""Your task is to carefully read a source text and a translation from {source_lang} to {target_lang}, and then give constructive criticism and helpful suggestions to improve the translation. \
131
+ The final style and tone of the translation should match the style of {target_lang} colloquially spoken in {country}.
132
+
133
+ The source text and initial translation, delimited by XML tags <SOURCE_TEXT></SOURCE_TEXT> and <TRANSLATION></TRANSLATION>, are as follows:
134
+
135
+ <SOURCE_TEXT>
136
+ {source_text}
137
+ </SOURCE_TEXT>
138
+
139
+ <TRANSLATION>
140
+ {translation_1}
141
+ </TRANSLATION>
142
+
143
+ When writing suggestions, pay attention to whether there are ways to improve the translation's \n\
144
+ (i) accuracy (by correcting errors of addition, mistranslation, omission, or untranslated text),\n\
145
+ (ii) fluency (by applying {target_lang} grammar, spelling and punctuation rules, and ensuring there are no unnecessary repetitions),\n\
146
+ (iii) style (by ensuring the translations reflect the style of the source text and takes into account any cultural context),\n\
147
+ (iv) terminology (by ensuring terminology use is consistent and reflects the source text domain; and by only ensuring you use equivalent idioms {target_lang}).\n\
148
+
149
+ Write a list of specific, helpful and constructive suggestions for improving the translation.
150
+ Each suggestion should address one specific part of the translation.
151
+ Output only the suggestions and nothing else."""
152
+
153
+ else:
154
+ reflection_prompt = f"""Your task is to carefully read a source text and a translation from {source_lang} to {target_lang}, and then give constructive criticism and helpful suggestions to improve the translation. \
155
+
156
+ The source text and initial translation, delimited by XML tags <SOURCE_TEXT></SOURCE_TEXT> and <TRANSLATION></TRANSLATION>, are as follows:
157
+
158
+ <SOURCE_TEXT>
159
+ {source_text}
160
+ </SOURCE_TEXT>
161
+
162
+ <TRANSLATION>
163
+ {translation_1}
164
+ </TRANSLATION>
165
+
166
+ When writing suggestions, pay attention to whether there are ways to improve the translation's \n\
167
+ (i) accuracy (by correcting errors of addition, mistranslation, omission, or untranslated text),\n\
168
+ (ii) fluency (by applying {target_lang} grammar, spelling and punctuation rules, and ensuring there are no unnecessary repetitions),\n\
169
+ (iii) style (by ensuring the translations reflect the style of the source text and takes into account any cultural context),\n\
170
+ (iv) terminology (by ensuring terminology use is consistent and reflects the source text domain; and by only ensuring you use equivalent idioms {target_lang}).\n\
171
+
172
+ Write a list of specific, helpful and constructive suggestions for improving the translation.
173
+ Each suggestion should address one specific part of the translation.
174
+ Output only the suggestions and nothing else."""
175
+
176
+ prompt = reflection_prompt.format(
177
+ source_lang=source_lang,
178
+ target_lang=target_lang,
179
+ source_text=source_text,
180
+ translation_1=translation_1,
181
+ )
182
+ reflection = get_completion(prompt, system_message=system_message)
183
+ return reflection
184
+
185
+
186
+ def one_chunk_improve_translation(
187
+ source_lang: str,
188
+ target_lang: str,
189
+ source_text: str,
190
+ translation_1: str,
191
+ reflection: str,
192
+ ) -> str:
193
+ """
194
+ Use the reflection to improve the translation, treating the entire text as one chunk.
195
+
196
+ Args:
197
+ source_lang (str): The source language of the text.
198
+ target_lang (str): The target language for the translation.
199
+ source_text (str): The original text in the source language.
200
+ translation_1 (str): The initial translation of the source text.
201
+ reflection (str): Expert suggestions and constructive criticism for improving the translation.
202
+
203
+ Returns:
204
+ str: The improved translation based on the expert suggestions.
205
+ """
206
+
207
+ system_message = f"You are an expert linguist, specializing in translation editing from {source_lang} to {target_lang}."
208
+
209
+ prompt = f"""Your task is to carefully read, then edit, a translation from {source_lang} to {target_lang}, taking into
210
+ account a list of expert suggestions and constructive criticisms.
211
+
212
+ The source text, the initial translation, and the expert linguist suggestions are delimited by XML tags <SOURCE_TEXT></SOURCE_TEXT>, <TRANSLATION></TRANSLATION> and <EXPERT_SUGGESTIONS></EXPERT_SUGGESTIONS> \
213
+ as follows:
214
+
215
+ <SOURCE_TEXT>
216
+ {source_text}
217
+ </SOURCE_TEXT>
218
+
219
+ <TRANSLATION>
220
+ {translation_1}
221
+ </TRANSLATION>
222
+
223
+ <EXPERT_SUGGESTIONS>
224
+ {reflection}
225
+ </EXPERT_SUGGESTIONS>
226
+
227
+ Please take into account the expert suggestions when editing the translation. Edit the translation by ensuring:
228
+
229
+ (i) accuracy (by correcting errors of addition, mistranslation, omission, or untranslated text),
230
+ (ii) fluency (by applying {target_lang} grammar, spelling and punctuation rules and ensuring there are no unnecessary repetitions), \
231
+ (iii) style (by ensuring the translations reflect the style of the source text)
232
+ (iv) terminology (inappropriate for context, inconsistent use), or
233
+ (v) other errors.
234
+
235
+ Output only the new translation and nothing else."""
236
+
237
+ translation_2 = get_completion(prompt, system_message)
238
+
239
+ return translation_2
240
+
241
+
242
+ def one_chunk_translate_text(
243
+ source_lang: str, target_lang: str, source_text: str, country: str = ""
244
+ ) -> str:
245
+ """
246
+ Translate a single chunk of text from the source language to the target language.
247
+
248
+ This function performs a two-step translation process:
249
+ 1. Get an initial translation of the source text.
250
+ 2. Reflect on the initial translation and generate an improved translation.
251
+
252
+ Args:
253
+ source_lang (str): The source language of the text.
254
+ target_lang (str): The target language for the translation.
255
+ source_text (str): The text to be translated.
256
+ country (str): Country specified for target language.
257
+ Returns:
258
+ str: The improved translation of the source text.
259
+ """
260
+ translation_1 = one_chunk_initial_translation(
261
+ source_lang, target_lang, source_text
262
+ )
263
+
264
+ reflection = one_chunk_reflect_on_translation(
265
+ source_lang, target_lang, source_text, translation_1, country
266
+ )
267
+ translation_2 = one_chunk_improve_translation(
268
+ source_lang, target_lang, source_text, translation_1, reflection
269
+ )
270
+
271
+ return translation_2
272
+
273
+
274
+ def num_tokens_in_string(
275
+ input_str: str, encoding_name: str = "cl100k_base"
276
+ ) -> int:
277
+ """
278
+ Calculate the number of tokens in a given string using a specified encoding.
279
+
280
+ Args:
281
+ str (str): The input string to be tokenized.
282
+ encoding_name (str, optional): The name of the encoding to use. Defaults to "cl100k_base",
283
+ which is the most commonly used encoder (used by GPT-4).
284
+
285
+ Returns:
286
+ int: The number of tokens in the input string.
287
+
288
+ Example:
289
+ >>> text = "Hello, how are you?"
290
+ >>> num_tokens = num_tokens_in_string(text)
291
+ >>> print(num_tokens)
292
+ 5
293
+ """
294
+ encoding = tiktoken.get_encoding(encoding_name)
295
+ num_tokens = len(encoding.encode(input_str))
296
+ return num_tokens
297
+
298
+
299
+ def multichunk_initial_translation(
300
+ source_lang: str, target_lang: str, source_text_chunks: List[str]
301
+ ) -> List[str]:
302
+ """
303
+ Translate a text in multiple chunks from the source language to the target language.
304
+
305
+ Args:
306
+ source_lang (str): The source language of the text.
307
+ target_lang (str): The target language for translation.
308
+ source_text_chunks (List[str]): A list of text chunks to be translated.
309
+
310
+ Returns:
311
+ List[str]: A list of translated text chunks.
312
+ """
313
+
314
+ system_message = f"You are an expert linguist, specializing in translation from {source_lang} to {target_lang}."
315
+
316
+ translation_prompt = """Your task is provide a professional translation from {source_lang} to {target_lang} of PART of a text.
317
+
318
+ The source text is below, delimited by XML tags <SOURCE_TEXT> and </SOURCE_TEXT>. Translate only the part within the source text
319
+ delimited by <TRANSLATE_THIS> and </TRANSLATE_THIS>. You can use the rest of the source text as context, but do not translate any
320
+ of the other text. Do not output anything other than the translation of the indicated part of the text.
321
+
322
+ <SOURCE_TEXT>
323
+ {tagged_text}
324
+ </SOURCE_TEXT>
325
+
326
+ To reiterate, you should translate only this part of the text, shown here again between <TRANSLATE_THIS> and </TRANSLATE_THIS>:
327
+ <TRANSLATE_THIS>
328
+ {chunk_to_translate}
329
+ </TRANSLATE_THIS>
330
+
331
+ Output only the translation of the portion you are asked to translate, and nothing else.
332
+ """
333
+
334
+ translation_chunks = []
335
+ for i in range(len(source_text_chunks)):
336
+ # Will translate chunk i
337
+ tagged_text = (
338
+ "".join(source_text_chunks[0:i])
339
+ + "<TRANSLATE_THIS>"
340
+ + source_text_chunks[i]
341
+ + "</TRANSLATE_THIS>"
342
+ + "".join(source_text_chunks[i + 1 :])
343
+ )
344
+
345
+ prompt = translation_prompt.format(
346
+ source_lang=source_lang,
347
+ target_lang=target_lang,
348
+ tagged_text=tagged_text,
349
+ chunk_to_translate=source_text_chunks[i],
350
+ )
351
+
352
+ translation = get_completion(prompt, system_message=system_message)
353
+ translation_chunks.append(translation)
354
+
355
+ return translation_chunks
356
+
357
+
358
+ def multichunk_reflect_on_translation(
359
+ source_lang: str,
360
+ target_lang: str,
361
+ source_text_chunks: List[str],
362
+ translation_1_chunks: List[str],
363
+ country: str = "",
364
+ ) -> List[str]:
365
+ """
366
+ Provides constructive criticism and suggestions for improving a partial translation.
367
+
368
+ Args:
369
+ source_lang (str): The source language of the text.
370
+ target_lang (str): The target language of the translation.
371
+ source_text_chunks (List[str]): The source text divided into chunks.
372
+ translation_1_chunks (List[str]): The translated chunks corresponding to the source text chunks.
373
+ country (str): Country specified for target language.
374
+
375
+ Returns:
376
+ List[str]: A list of reflections containing suggestions for improving each translated chunk.
377
+ """
378
+
379
+ system_message = f"You are an expert linguist specializing in translation from {source_lang} to {target_lang}. \
380
+ You will be provided with a source text and its translation and your goal is to improve the translation."
381
+
382
+ if country != "":
383
+ reflection_prompt = """Your task is to carefully read a source text and part of a translation of that text from {source_lang} to {target_lang}, and then give constructive criticism and helpful suggestions for improving the translation.
384
+ The final style and tone of the translation should match the style of {target_lang} colloquially spoken in {country}.
385
+
386
+ The source text is below, delimited by XML tags <SOURCE_TEXT> and </SOURCE_TEXT>, and the part that has been translated
387
+ is delimited by <TRANSLATE_THIS> and </TRANSLATE_THIS> within the source text. You can use the rest of the source text
388
+ as context for critiquing the translated part.
389
+
390
+ <SOURCE_TEXT>
391
+ {tagged_text}
392
+ </SOURCE_TEXT>
393
+
394
+ To reiterate, only part of the text is being translated, shown here again between <TRANSLATE_THIS> and </TRANSLATE_THIS>:
395
+ <TRANSLATE_THIS>
396
+ {chunk_to_translate}
397
+ </TRANSLATE_THIS>
398
+
399
+ The translation of the indicated part, delimited below by <TRANSLATION> and </TRANSLATION>, is as follows:
400
+ <TRANSLATION>
401
+ {translation_1_chunk}
402
+ </TRANSLATION>
403
+
404
+ When writing suggestions, pay attention to whether there are ways to improve the translation's:\n\
405
+ (i) accuracy (by correcting errors of addition, mistranslation, omission, or untranslated text),\n\
406
+ (ii) fluency (by applying {target_lang} grammar, spelling and punctuation rules, and ensuring there are no unnecessary repetitions),\n\
407
+ (iii) style (by ensuring the translations reflect the style of the source text and takes into account any cultural context),\n\
408
+ (iv) terminology (by ensuring terminology use is consistent and reflects the source text domain; and by only ensuring you use equivalent idioms {target_lang}).\n\
409
+
410
+ Write a list of specific, helpful and constructive suggestions for improving the translation.
411
+ Each suggestion should address one specific part of the translation.
412
+ Output only the suggestions and nothing else."""
413
+
414
+ else:
415
+ reflection_prompt = """Your task is to carefully read a source text and part of a translation of that text from {source_lang} to {target_lang}, and then give constructive criticism and helpful suggestions for improving the translation.
416
+
417
+ The source text is below, delimited by XML tags <SOURCE_TEXT> and </SOURCE_TEXT>, and the part that has been translated
418
+ is delimited by <TRANSLATE_THIS> and </TRANSLATE_THIS> within the source text. You can use the rest of the source text
419
+ as context for critiquing the translated part.
420
+
421
+ <SOURCE_TEXT>
422
+ {tagged_text}
423
+ </SOURCE_TEXT>
424
+
425
+ To reiterate, only part of the text is being translated, shown here again between <TRANSLATE_THIS> and </TRANSLATE_THIS>:
426
+ <TRANSLATE_THIS>
427
+ {chunk_to_translate}
428
+ </TRANSLATE_THIS>
429
+
430
+ The translation of the indicated part, delimited below by <TRANSLATION> and </TRANSLATION>, is as follows:
431
+ <TRANSLATION>
432
+ {translation_1_chunk}
433
+ </TRANSLATION>
434
+
435
+ When writing suggestions, pay attention to whether there are ways to improve the translation's:\n\
436
+ (i) accuracy (by correcting errors of addition, mistranslation, omission, or untranslated text),\n\
437
+ (ii) fluency (by applying {target_lang} grammar, spelling and punctuation rules, and ensuring there are no unnecessary repetitions),\n\
438
+ (iii) style (by ensuring the translations reflect the style of the source text and takes into account any cultural context),\n\
439
+ (iv) terminology (by ensuring terminology use is consistent and reflects the source text domain; and by only ensuring you use equivalent idioms {target_lang}).\n\
440
+
441
+ Write a list of specific, helpful and constructive suggestions for improving the translation.
442
+ Each suggestion should address one specific part of the translation.
443
+ Output only the suggestions and nothing else."""
444
+
445
+ reflection_chunks = []
446
+ for i in range(len(source_text_chunks)):
447
+ # Will translate chunk i
448
+ tagged_text = (
449
+ "".join(source_text_chunks[0:i])
450
+ + "<TRANSLATE_THIS>"
451
+ + source_text_chunks[i]
452
+ + "</TRANSLATE_THIS>"
453
+ + "".join(source_text_chunks[i + 1 :])
454
+ )
455
+ if country != "":
456
+ prompt = reflection_prompt.format(
457
+ source_lang=source_lang,
458
+ target_lang=target_lang,
459
+ tagged_text=tagged_text,
460
+ chunk_to_translate=source_text_chunks[i],
461
+ translation_1_chunk=translation_1_chunks[i],
462
+ country=country,
463
+ )
464
+ else:
465
+ prompt = reflection_prompt.format(
466
+ source_lang=source_lang,
467
+ target_lang=target_lang,
468
+ tagged_text=tagged_text,
469
+ chunk_to_translate=source_text_chunks[i],
470
+ translation_1_chunk=translation_1_chunks[i],
471
+ )
472
+
473
+ reflection = get_completion(prompt, system_message=system_message)
474
+ reflection_chunks.append(reflection)
475
+
476
+ return reflection_chunks
477
+
478
+
479
+ def multichunk_improve_translation(
480
+ source_lang: str,
481
+ target_lang: str,
482
+ source_text_chunks: List[str],
483
+ translation_1_chunks: List[str],
484
+ reflection_chunks: List[str],
485
+ ) -> List[str]:
486
+ """
487
+ Improves the translation of a text from source language to target language by considering expert suggestions.
488
+
489
+ Args:
490
+ source_lang (str): The source language of the text.
491
+ target_lang (str): The target language for translation.
492
+ source_text_chunks (List[str]): The source text divided into chunks.
493
+ translation_1_chunks (List[str]): The initial translation of each chunk.
494
+ reflection_chunks (List[str]): Expert suggestions for improving each translated chunk.
495
+
496
+ Returns:
497
+ List[str]: The improved translation of each chunk.
498
+ """
499
+
500
+ system_message = f"You are an expert linguist, specializing in translation editing from {source_lang} to {target_lang}."
501
+
502
+ improvement_prompt = """Your task is to carefully read, then improve, a translation from {source_lang} to {target_lang}, taking into
503
+ account a set of expert suggestions and constructive critisms. Below, the source text, initial translation, and expert suggestions are provided.
504
+
505
+ The source text is below, delimited by XML tags <SOURCE_TEXT> and </SOURCE_TEXT>, and the part that has been translated
506
+ is delimited by <TRANSLATE_THIS> and </TRANSLATE_THIS> within the source text. You can use the rest of the source text
507
+ as context, but need to provide a translation only of the part indicated by <TRANSLATE_THIS> and </TRANSLATE_THIS>.
508
+
509
+ <SOURCE_TEXT>
510
+ {tagged_text}
511
+ </SOURCE_TEXT>
512
+
513
+ To reiterate, only part of the text is being translated, shown here again between <TRANSLATE_THIS> and </TRANSLATE_THIS>:
514
+ <TRANSLATE_THIS>
515
+ {chunk_to_translate}
516
+ </TRANSLATE_THIS>
517
+
518
+ The translation of the indicated part, delimited below by <TRANSLATION> and </TRANSLATION>, is as follows:
519
+ <TRANSLATION>
520
+ {translation_1_chunk}
521
+ </TRANSLATION>
522
+
523
+ The expert translations of the indicated part, delimited below by <EXPERT_SUGGESTIONS> and </EXPERT_SUGGESTIONS>, is as follows:
524
+ <EXPERT_SUGGESTIONS>
525
+ {reflection_chunk}
526
+ </EXPERT_SUGGESTIONS>
527
+
528
+ Taking into account the expert suggestions rewrite the translation to improve it, paying attention
529
+ to whether there are ways to improve the translation's
530
+
531
+ (i) accuracy (by correcting errors of addition, mistranslation, omission, or untranslated text),
532
+ (ii) fluency (by applying {target_lang} grammar, spelling and punctuation rules and ensuring there are no unnecessary repetitions), \
533
+ (iii) style (by ensuring the translations reflect the style of the source text)
534
+ (iv) terminology (inappropriate for context, inconsistent use), or
535
+ (v) other errors.
536
+
537
+ Output only the new translation of the indicated part and nothing else."""
+
+     translation_2_chunks = []
+     for i in range(len(source_text_chunks)):
+         # Will translate chunk i
+         tagged_text = (
+             "".join(source_text_chunks[0:i])
+             + "<TRANSLATE_THIS>"
+             + source_text_chunks[i]
+             + "</TRANSLATE_THIS>"
+             + "".join(source_text_chunks[i + 1 :])
+         )
+
+         prompt = improvement_prompt.format(
+             source_lang=source_lang,
+             target_lang=target_lang,
+             tagged_text=tagged_text,
+             chunk_to_translate=source_text_chunks[i],
+             translation_1_chunk=translation_1_chunks[i],
+             reflection_chunk=reflection_chunks[i],
+         )
+
+         translation_2 = get_completion(prompt, system_message=system_message)
+         translation_2_chunks.append(translation_2)
+
+     return translation_2_chunks
+
+
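To make the chunk tagging above concrete, here is a minimal, runnable sketch (illustrative only, not part of the commit; the chunks and index are made up):

```python
# Rebuild the full source text around chunk i, marking only chunk i for translation.
chunks = ["The cat sat. ", "It was warm. ", "Then it left."]
i = 1  # the chunk currently being improved

tagged_text = (
    "".join(chunks[:i])
    + "<TRANSLATE_THIS>"
    + chunks[i]
    + "</TRANSLATE_THIS>"
    + "".join(chunks[i + 1:])
)

print(tagged_text)
# The cat sat. <TRANSLATE_THIS>It was warm. </TRANSLATE_THIS>Then it left.
```

The model sees the whole document as context but is asked to output only the tagged chunk.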
+ def multichunk_translation(
+     source_lang, target_lang, source_text_chunks, country: str = ""
+ ):
+     """
+     Translates a list of text chunks by running the full pipeline on each chunk: initial translation, reflection, and improvement.
+
+     Args:
+         source_lang (str): The source language of the text chunks.
+         target_lang (str): The target language for translation.
+         source_text_chunks (List[str]): The list of source text chunks to be translated.
+         country (str): Country specified for target language.
+
+     Returns:
+         List[str]: The list of improved translations for each source text chunk.
+     """
+
+     translation_1_chunks = multichunk_initial_translation(
+         source_lang, target_lang, source_text_chunks
+     )
+
+     reflection_chunks = multichunk_reflect_on_translation(
+         source_lang,
+         target_lang,
+         source_text_chunks,
+         translation_1_chunks,
+         country,
+     )
+
+     translation_2_chunks = multichunk_improve_translation(
+         source_lang,
+         target_lang,
+         source_text_chunks,
+         translation_1_chunks,
+         reflection_chunks,
+     )
+
+     return translation_2_chunks
+
+
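If the text has already been split, `multichunk_translation` can be driven directly. A minimal usage sketch (illustrative only; the languages and chunks are made up, and a configured API key is assumed):

```python
chunks = [
    "First paragraph of the document. ",
    "Second paragraph of the document.",
]

improved = multichunk_translation(
    source_lang="English",
    target_lang="German",
    source_text_chunks=chunks,
    country="Germany",  # optional regional hint for the reflection step
)
print("".join(improved))
```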
+ def calculate_chunk_size(token_count: int, token_limit: int) -> int:
+     """
+     Calculate the chunk size based on the token count and token limit.
+
+     Args:
+         token_count (int): The total number of tokens.
+         token_limit (int): The maximum number of tokens allowed per chunk.
+
+     Returns:
+         int: The calculated chunk size.
+
+     Description:
+         This function calculates the chunk size based on the given token count and token limit.
+         If the token count is less than or equal to the token limit, the function returns the token count as the chunk size.
+         Otherwise, it calculates the number of chunks needed to accommodate all the tokens within the token limit.
+         The chunk size is determined by dividing the token count by the number of chunks.
+         If there are remaining tokens after dividing the token count by the token limit,
+         the chunk size is adjusted by adding the remaining tokens divided by the number of chunks.
+
+     Example:
+         >>> calculate_chunk_size(1000, 500)
+         500
+         >>> calculate_chunk_size(1530, 500)
+         389
+         >>> calculate_chunk_size(2242, 500)
+         496
+     """
+
+     if token_count <= token_limit:
+         return token_count
+
+     num_chunks = (token_count + token_limit - 1) // token_limit
+     chunk_size = token_count // num_chunks
+
+     remaining_tokens = token_count % token_limit
+     if remaining_tokens > 0:
+         chunk_size += remaining_tokens // num_chunks
+
+     return chunk_size
+
+
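As a quick sanity check on the arithmetic (illustrative, not part of the commit), the docstring's 1530-token example works out as follows:

```python
token_count, token_limit = 1530, 500

num_chunks = (token_count + token_limit - 1) // token_limit  # ceil(1530 / 500) = 4
chunk_size = token_count // num_chunks                       # 1530 // 4 = 382

remaining_tokens = token_count % token_limit                 # 1530 % 500 = 30
if remaining_tokens > 0:
    chunk_size += remaining_tokens // num_chunks             # 382 + 30 // 4 = 389

print(chunk_size)  # 389, matching the docstring example
```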
+ def translate(
+     source_lang,
+     target_lang,
+     source_text,
+     country,
+     max_tokens=MAX_TOKENS_PER_CHUNK,
+ ):
+     """Translate the source_text from source_lang to target_lang."""
+
+     num_tokens_in_text = num_tokens_in_string(source_text)
+
+     ic(num_tokens_in_text)
+
+     if num_tokens_in_text < max_tokens:
+         ic("Translating text as single chunk")
+
+         final_translation = one_chunk_translate_text(
+             source_lang, target_lang, source_text, country
+         )
+
+         return final_translation
+
+     else:
+         ic("Translating text as multiple chunks")
+
+         token_size = calculate_chunk_size(
+             token_count=num_tokens_in_text, token_limit=max_tokens
+         )
+
+         ic(token_size)
+
+         text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
+             model_name="gpt-4",
+             chunk_size=token_size,
+             chunk_overlap=0,
+         )
+
+         source_text_chunks = text_splitter.split_text(source_text)
+
+         translation_2_chunks = multichunk_translation(
+             source_lang, target_lang, source_text_chunks, country
+         )
+
+         return "".join(translation_2_chunks)
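For orientation, a minimal end-to-end sketch of the `translate` entry point above (illustrative only: it assumes the package is importable as `translation_agent` and that the values from `.env.sample` are configured in `.env`; the text and languages are made up):

```python
from dotenv import load_dotenv

from translation_agent.utils import translate

load_dotenv()  # loads OPENAI_API_KEY / OPENAI_BASE_URL / OPENAI_MODEL from .env

final_translation = translate(
    source_lang="English",
    target_lang="Spanish",
    source_text="Hello, how are you?",
    country="Mexico",  # steers the reflection step toward a regional register
)
print(final_translation)
```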
tests/test_agent.py ADDED
@@ -0,0 +1,289 @@
+ import json
+ import os
+ from unittest.mock import patch
+
+ import openai
+ import pytest
+ from dotenv import load_dotenv
+
+ # from translation_agent.utils import find_sentence_starts
+ from translation_agent.utils import get_completion
+ from translation_agent.utils import num_tokens_in_string
+ from translation_agent.utils import one_chunk_improve_translation
+ from translation_agent.utils import one_chunk_initial_translation
+ from translation_agent.utils import one_chunk_reflect_on_translation
+ from translation_agent.utils import one_chunk_translate_text
+
+
+ load_dotenv()
+
+ client = openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
+
+
+ def test_get_completion_json_mode_api_call():
+     # Set up the test data
+     prompt = "What is the capital of France in json?"
+     system_message = "You are a helpful assistant."
+     model = "gpt-4-turbo"
+     temperature = 0.3
+     json_mode = True
+
+     # Call the function with json_mode=True
+     result = get_completion(
+         prompt, system_message, model, temperature, json_mode
+     )
+
+     # Assert that the result is not None
+     assert result is not None
+
+     # Assert that it can be parsed into a dictionary (valid JSON)
+     assert isinstance(json.loads(result), dict)
+
+
+ def test_get_completion_non_json_mode_api_call():
+     # Set up the test data
+     prompt = "What is the capital of France?"
+     system_message = "You are a helpful assistant."
+     model = "gpt-4-turbo"
+     temperature = 0.3
+     json_mode = False
+
+     # Call the function with json_mode=False
+     result = get_completion(
+         prompt, system_message, model, temperature, json_mode
+     )
+
+     # Assert that the result is not None
+     assert result is not None
+
+     # Assert that the result has the expected response format
+     assert isinstance(result, str)
+
+
+ def test_one_chunk_initial_translation():
+     # Define test data
+     source_lang = "English"
+     target_lang = "Spanish"
+     source_text = "Hello, how are you?"
+     expected_translation = "Hola, ¿cómo estás?"
+
+     # Mock the get_completion function
+     with patch(
+         "translation_agent.utils.get_completion"
+     ) as mock_get_completion:
+         mock_get_completion.return_value = expected_translation
+
+         # Call the function with test data
+         translation = one_chunk_initial_translation(
+             source_lang, target_lang, source_text
+         )
+
+         # Assert the expected translation is returned
+         assert translation == expected_translation
+
+         # Assert the get_completion function was called with the correct arguments
+         expected_system_message = f"You are an expert linguist, specializing in translation from {source_lang} to {target_lang}."
+         expected_prompt = f"""This is an {source_lang} to {target_lang} translation, please provide the {target_lang} translation for this text. \
+ Do not provide any explanations or text apart from the translation.
+ {source_lang}: {source_text}
+
+ {target_lang}:"""
+
+         mock_get_completion.assert_called_once_with(
+             expected_prompt, system_message=expected_system_message
+         )
+
+
+ def test_one_chunk_reflect_on_translation():
+     # Define test data
+     source_lang = "English"
+     target_lang = "Spanish"
+     country = "Mexico"
+     source_text = "This is a sample source text."
+     translation_1 = "Este es un texto de origen de muestra."
+
+     # Define the expected reflection
+     expected_reflection = "The translation is accurate and conveys the meaning of the source text well. However, here are a few suggestions for improvement:\n\n1. Consider using 'texto fuente' instead of 'texto de origen' for a more natural translation of 'source text'.\n2. Add a definite article before 'texto fuente' to improve fluency: 'Este es un texto fuente de muestra.'\n3. If the context allows, you could also use 'texto de ejemplo' as an alternative translation for 'sample text'."
+
+     # Mock the get_completion function
+     with patch(
+         "translation_agent.utils.get_completion"
+     ) as mock_get_completion:
+         mock_get_completion.return_value = expected_reflection
+
+         # Call the function with test data
+         reflection = one_chunk_reflect_on_translation(
+             source_lang, target_lang, source_text, translation_1, country
+         )
+
+         # Assert that the reflection matches the expected reflection
+         assert reflection == expected_reflection
+
+         # Assert that the get_completion function was called with the correct arguments
+         expected_prompt = f"""Your task is to carefully read a source text and a translation from {source_lang} to {target_lang}, and then give constructive criticism and helpful suggestions to improve the translation. \
+ The final style and tone of the translation should match the style of {target_lang} colloquially spoken in {country}.
+
+ The source text and initial translation, delimited by XML tags <SOURCE_TEXT></SOURCE_TEXT> and <TRANSLATION></TRANSLATION>, are as follows:
+
+ <SOURCE_TEXT>
+ {source_text}
+ </SOURCE_TEXT>
+
+ <TRANSLATION>
+ {translation_1}
+ </TRANSLATION>
+
+ When writing suggestions, pay attention to whether there are ways to improve the translation's \n\
+ (i) accuracy (by correcting errors of addition, mistranslation, omission, or untranslated text),\n\
+ (ii) fluency (by applying {target_lang} grammar, spelling and punctuation rules, and ensuring there are no unnecessary repetitions),\n\
+ (iii) style (by ensuring the translations reflect the style of the source text and takes into account any cultural context),\n\
+ (iv) terminology (by ensuring terminology use is consistent and reflects the source text domain; and by only ensuring you use equivalent idioms {target_lang}).\n\
+
+ Write a list of specific, helpful and constructive suggestions for improving the translation.
+ Each suggestion should address one specific part of the translation.
+ Output only the suggestions and nothing else."""
+         expected_system_message = f"You are an expert linguist specializing in translation from {source_lang} to {target_lang}. \
+ You will be provided with a source text and its translation and your goal is to improve the translation."
+         mock_get_completion.assert_called_once_with(
+             expected_prompt, system_message=expected_system_message
+         )
+
+
+ @pytest.fixture
+ def example_data():
+     return {
+         "source_lang": "English",
+         "target_lang": "Spanish",
+         "source_text": "This is a sample source text.",
+         "translation_1": "Esta es una traducción de ejemplo.",
+         "reflection": "The translation is accurate but could be more fluent.",
+     }
+
+
+ @patch("translation_agent.utils.get_completion")
+ def test_one_chunk_improve_translation(mock_get_completion, example_data):
+     # Set up the mock return value for get_completion
+     mock_get_completion.return_value = (
+         "Esta es una traducción de ejemplo mejorada."
+     )
+
+     # Call the function with the example data
+     result = one_chunk_improve_translation(
+         example_data["source_lang"],
+         example_data["target_lang"],
+         example_data["source_text"],
+         example_data["translation_1"],
+         example_data["reflection"],
+     )
+
+     # Assert that the function returns the expected translation
+     assert result == "Esta es una traducción de ejemplo mejorada."
+
+     # Assert that get_completion was called with the expected arguments
+     expected_prompt = f"""Your task is to carefully read, then edit, a translation from {example_data["source_lang"]} to {example_data["target_lang"]}, taking into
+ account a list of expert suggestions and constructive criticisms.
+
+ The source text, the initial translation, and the expert linguist suggestions are delimited by XML tags <SOURCE_TEXT></SOURCE_TEXT>, <TRANSLATION></TRANSLATION> and <EXPERT_SUGGESTIONS></EXPERT_SUGGESTIONS> \
+ as follows:
+
+ <SOURCE_TEXT>
+ {example_data["source_text"]}
+ </SOURCE_TEXT>
+
+ <TRANSLATION>
+ {example_data["translation_1"]}
+ </TRANSLATION>
+
+ <EXPERT_SUGGESTIONS>
+ {example_data["reflection"]}
+ </EXPERT_SUGGESTIONS>
+
+ Please take into account the expert suggestions when editing the translation. Edit the translation by ensuring:
+
+ (i) accuracy (by correcting errors of addition, mistranslation, omission, or untranslated text),
+ (ii) fluency (by applying Spanish grammar, spelling and punctuation rules and ensuring there are no unnecessary repetitions), \
+ (iii) style (by ensuring the translations reflect the style of the source text)
+ (iv) terminology (inappropriate for context, inconsistent use), or
+ (v) other errors.
+
+ Output only the new translation and nothing else."""
+
+     expected_system_message = "You are an expert linguist, specializing in translation editing from English to Spanish."
+
+     mock_get_completion.assert_called_once_with(
+         expected_prompt, expected_system_message
+     )
+
+
+ def test_one_chunk_translate_text(mocker):
+     # Define test data
+     source_lang = "English"
+     target_lang = "Spanish"
+     country = "Mexico"
+     source_text = "Hello, how are you?"
+     translation_1 = "Hola, ¿cómo estás?"
+     reflection = "The translation looks good, but it could be more formal."
+     translation_2 = "Hola, ¿cómo está usted?"
+
+     # Mock the helper functions
+     mock_initial_translation = mocker.patch(
+         "translation_agent.utils.one_chunk_initial_translation",
+         return_value=translation_1,
+     )
+     mock_reflect_on_translation = mocker.patch(
+         "translation_agent.utils.one_chunk_reflect_on_translation",
+         return_value=reflection,
+     )
+     mock_improve_translation = mocker.patch(
+         "translation_agent.utils.one_chunk_improve_translation",
+         return_value=translation_2,
+     )
+
+     # Call the function being tested
+     result = one_chunk_translate_text(
+         source_lang, target_lang, source_text, country
+     )
+
+     # Assert the expected result
+     assert result == translation_2
+
+     # Assert that the helper functions were called with the correct arguments
+     mock_initial_translation.assert_called_once_with(
+         source_lang, target_lang, source_text
+     )
+     mock_reflect_on_translation.assert_called_once_with(
+         source_lang, target_lang, source_text, translation_1, country
+     )
+     mock_improve_translation.assert_called_once_with(
+         source_lang, target_lang, source_text, translation_1, reflection
+     )
+
+
+ def test_num_tokens_in_string():
+     # Test case 1: Empty string
+     assert num_tokens_in_string("") == 0
+
+     # Test case 2: Simple string
+     assert num_tokens_in_string("Hello, world!") == 4
+
+     # Test case 3: String with special characters
+     assert (
+         num_tokens_in_string(
+             "This is a test string with special characters: !@#$%^&*()"
+         )
+         == 16
+     )
+
+     # Test case 4: String with non-ASCII characters
+     assert num_tokens_in_string("Héllò, wörld! 你好,世界!") == 17
+
+     # Test case 5: Long string
+     long_string = (
+         "Lorem ipsum dolor sit amet, consectetur adipiscing elit. " * 10
+     )
+     assert num_tokens_in_string(long_string) == 101
+
+     # Test case 6: Different encoding
+     assert (
+         num_tokens_in_string("Hello, world!", encoding_name="p50k_base") == 4
+     )
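A note on running this suite (not part of the commit): `test_one_chunk_translate_text` uses the `mocker` fixture, which comes from the `pytest-mock` plugin, so that package needs to be installed alongside `pytest`; and the two `get_completion` tests at the top are not mocked, so they appear to hit the live API and require a valid `OPENAI_API_KEY` in `.env`. With those in place, `pytest tests/test_agent.py` runs the file.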