---
language:
  - en
  - ja
task_categories:
  - translation
dataset_info:
  features:
    - name: translation
      struct:
        - name: en
          dtype: string
        - name: ja
          dtype: string
  splits:
    - name: train
      num_bytes: 15324918
      num_examples: 147865
  download_size: 8480328
  dataset_size: 15324918
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Dataset Card for Tanaka-corpus

## Dataset Summary

This dataset consists of Japanese-English sentence pairs extracted from the Tanaka Corpus. For more information, see the project page: https://www.edrdg.org/wiki/index.php/Tanaka_Corpus

The corpus was compiled by Professor Yasuhito Tanaka at Hyogo University and his students, as described in his Pacling2001 paper (where it is described as the "Past Compilation Method"). At Pacling2001 Professor Tanaka released copies of the corpus and stated that it is in the public domain. According to Professor Christian Boitet, Professor Tanaka did not think the collection was of a very good standard. (Sadly, Prof. Tanaka died in early 2003.)

At the 2002 Papillon workshop in Tokyo, Professor Boitet included a copy of the corpus on a CD distributed to participants and suggested that it could serve as a source of examples for a dictionary. Jim Breen realised it had the potential to be a source of example sentences in the WWWJDIC server. He edited, reformatted and indexed the corpus and linked it at the word level to the dictionary function in the server (see below).

The inclusion of the Corpus in the WWWJDIC server exposed it to a wide audience, and a number of other systems incorporated the corpus into their operation. It also began to be used in some research projects in natural language processing.

In 2006 the Corpus was incorporated into the Tatoeba Project being developed by Trang Ho to provide a sentence-based multi-lingual resource. That project is now the "home" of the Corpus.

## How to use

```python
from datasets import load_dataset

dataset = load_dataset("Hoshikuzu/Tanaka-corpus")
```

If loading the full dataset takes too long, use streaming:

```python
from datasets import load_dataset

dataset = load_dataset("Hoshikuzu/Tanaka-corpus", streaming=True)
```
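
As a minimal sketch of how the streamed data can be consumed (assuming the default `train` split and the `translation` struct described in the metadata above), you could iterate over the first few pairs like this:

```python
from datasets import load_dataset

# Stream the corpus and print the first few English-Japanese pairs.
dataset = load_dataset("Hoshikuzu/Tanaka-corpus", streaming=True)

for i, example in enumerate(dataset["train"]):
    pair = example["translation"]
    print(pair["en"], "|", pair["ja"])
    if i >= 4:  # stop after five examples
        break
```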

## Data Instances

For example:

```python
{
  'en': "He doesn't see his family in his busy life.",
  'ja': '彼は忙しい生活の中で家族と会うことがない。'
}
```
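
Per the dataset features above, each record wraps such a pair in a single `translation` field. The following is only a sketch of the access pattern (the sentence returned for index 0 is not guaranteed to be the one shown above):

```python
from datasets import load_dataset

dataset = load_dataset("Hoshikuzu/Tanaka-corpus")

# Each row has a "translation" struct with "en" and "ja" strings.
pair = dataset["train"][0]["translation"]
print(pair["en"])  # English sentence
print(pair["ja"])  # Japanese sentence
```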

## Compilation

Professor Tanaka's students were given the task of collecting 300 sentence pairs each. After several years, 212,000 sentence pairs had been collected.

From inspection, it appears that many of the sentence pairs have been derived from textbooks, e.g. books used by Japanese students of English. Some are lines of songs, others are from popular books and Biblical passages.

The original collection contained large numbers of errors, both in the Japanese and the English. Many of the errors were in spelling and transcription, although in a significant number of cases the Japanese or English contained grammatical, syntactic, or other errors, or the translations did not match at all.

## Data Splits

Only a train split is provided.
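
Since only a `train` split is shipped, you may want to hold out your own validation set. Below is a minimal sketch using the `train_test_split` method from `datasets`; the 10% ratio and the seed are arbitrary choices, not part of the dataset.

```python
from datasets import load_dataset

dataset = load_dataset("Hoshikuzu/Tanaka-corpus")

# Carve out 10% of the training data as a validation set
# (the ratio and seed are arbitrary, not part of the dataset).
splits = dataset["train"].train_test_split(test_size=0.1, seed=42)
train_set, valid_set = splits["train"], splits["test"]
print(len(train_set), len(valid_set))
```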

## Warning/Disclaimer

The Corpus is a useful and interesting collection of matched Japanese and English sentence pairs; however, it cannot be regarded as containing natural or representative examples of text in either language, because of the way it was originally compiled and the artificial nature of the sources. It also still contains a large number of errors and repetitions, and it certainly should not be used for any statistical analyses of the text. While the Corpus appears to be adequate and useful as a source of examples of word usage, the user is advised to be cautious and critical. The following points should be considered:

  1. The sentences were typed in by students in order to meet a work requirement. Initially there were many mistakes in both the Japanese and the English; while many have been corrected, some still remain.
  2. Some sentences are clearly translations into Japanese of English sentences and often do not represent the most natural way things are said in Japanese (overuse of pronouns, etc.).
  3. Others contain English translations which are very literal renderings of the Japanese, and perhaps came from simple machine translation systems.
  4. Many of the sentences are of the kinds found in older "study for entrance exam" books, and thus are likely to contain contrived examples of grammar usage or slightly archaic English passed down from generation to generation by (Japanese) English teachers. They are not examples of normally used modern English and should not always be regarded as suitable for English study.
  5. Please DO NOT use the original file without exercising considerable caution. It contains thousands of errors and duplications in both the Japanese and English sentences. If any project wants to use the Tanaka material, it is STRONGLY recommended that the updated file be used.

## Credits

Many people have played a significant role in making the Tanaka Corpus available and useful:

  1. Christian Boitet, who alerted Jim Breen to its existence;
  2. Paul Blay, who for several years maintained and extended the Corpus, and did extensive work on the indices;
  3. Trang Ho and her team of collaborators, who made a home for the WWWJDIC indices in the Tatoeba Project and have greatly enhanced the Corpus;
  4. Francis Bond, who has been a recent contributor and user of the Corpus for NLP research.