The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    TypeError
Message:      Couldn't cast array of type
struct with fourteen numbered fields ("0" through "13"), each of type
struct<type: string, sentence-1-token-indices: list<item: int64>, sentence-2-token-indices: list<item: int64>, intention: string>
to a feature schema with only eleven numbered keys ('0' through '10'), each of type
{'type': Value(dtype='string'), 'sentence-1-token-indices': Sequence(feature=Value(dtype='int64'), length=-1), 'sentence-2-token-indices': Sequence(feature=Value(dtype='int64'), length=-1), 'intention': Value(dtype='string')}
In short, the edits-combination column has no fixed key set: the schema was inferred from rows with eleven edit slots ("0" through "10", as in every preview row below), but at least one row carries fourteen ("0" through "13"), and Arrow cannot cast between the two struct widths, so dataset generation fails.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in cast_table_to_schema
                  arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in <listcomp>
                  arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in wrapper
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in <listcomp>
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2122, in cast_array_to_feature
                  raise TypeError(f"Couldn't cast array of type\n{_short_str(array.type)}\nto\n{_short_str(feature)}")
              TypeError: Couldn't cast array of type struct<"0"–"13"> to the {'0'–'10'} feature schema (both spelled out in full in the Message above)
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1524, in compute_config_parquet_and_info_response
                  parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1099, in stream_convert_to_parquet
                  builder._prepare_split(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset

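Until the column is normalized upstream, the data can still be loaded by hand. Below is a minimal sketch of a workaround, assuming the underlying data ships as JSON Lines; "user/dataset" and "data/train.jsonl" are placeholders for the real repo_id and filename from the dataset's file listing. It sidesteps the cast error by re-serializing the variable-key edits-combination dict as a plain JSON string, so every row gets the same Arrow type.

    # Minimal sketch of a manual load; repo_id and filename are placeholders.
    import json

    from datasets import Dataset
    from huggingface_hub import hf_hub_download

    path = hf_hub_download(
        repo_id="user/dataset",        # placeholder: the actual dataset repo
        filename="data/train.jsonl",   # placeholder: the actual data file
        repo_type="dataset",
    )

    rows = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            # The number of keys in edits-combination varies across rows
            # ("0"-"10" in the preview below, "0"-"13" elsewhere), which is
            # what breaks Arrow's schema inference. Storing the dict as a
            # JSON string gives every row the same (string) type; decode it
            # again with json.loads when using it.
            row["edits-combination"] = json.dumps(row["edits-combination"])
            rows.append(row)

    ds = Dataset.from_list(rows)

Casting to a string is the least invasive normalization; padding every dict out to the largest key set would also work, but it bakes the current maximum key count into the schema.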

Columns:
  sentence-pair-index        int64
  page-sentence-1            int64
  type-sentence-1            string
  id-sentence-1              string
  num_section-sentence-1     int64
  page-sentence-2            int64
  type-sentence-2            string
  id-sentence-2              string
  num_section-sentence-2     int64
  text-sentence-1            string
  text-sentence-2            string
  edits-combination          dict
  id_version_1               string
  id_version_2               string
  sentence_pair_id           string
  num_paragraph-sentence-1   int64
  num_sentence-sentence-1    int64
  num_paragraph-sentence-2   int64
  num_sentence-sentence-2    int64
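Judging from the preview rows below, the *-token-indices fields are character offsets into the corresponding text field, in half-open Python-slice convention; that reading is inferred from the data, not stated by the schema. A small check against row #4 below:

    # Sketch: read an edit's spans out of a preview row. The strings and the
    # edit dict are copied from row #4 below; the half-open character-offset
    # reading of the indices is an assumption that matches them.
    s1 = ("Standard imitation learning can fail when the expert demonstrators "
          "have differentsensory inputs than the imitating agent.")
    s2 = ("Standard imitation learning can fail when the expert demonstrators "
          "have different sensory inputs than the imitating agent.")
    edit = {
        "type": "Substitute",
        "sentence-1-token-indices": [72, 88],
        "sentence-2-token-indices": [72, 89],
        "intention": "Improve-grammar-Typo",
    }

    i, j = edit["sentence-1-token-indices"]
    k, m = edit["sentence-2-token-indices"]
    print(s1[i:j], "->", s2[k:m])   # differentsensory -> different sensory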
Preview rows. In every row, id_version_1 = Y6LzLWXlS8E and id_version_2 = WNev_iSes. Each record is headed by sentence-pair-index and sentence_pair_id, followed by each sentence's type, id, page, num_section, num_paragraph and num_sentence, then the two texts and the edits-combination dict; runs of null edit slots are collapsed into a range (e.g. "1"–"10": null).

#0  Y6LzLWXlS8E.WNev_iSes.00000
  sentence 1: Title Y6LzLWXlS8E_00_00_00 (page 0, section 0, paragraph null, sentence null)
  sentence 2: Title WNev_iSes_00_00_00 (page 0, section 0, paragraph null, sentence null)
  text 1: Deconfounded Imitation Learning
  text 2: Deconfounded Imitation Learning
  edits: { "0"–"10": null }

#1  Y6LzLWXlS8E.WNev_iSes.00001
  sentence 1: Paragraph Y6LzLWXlS8E_01_00_00 (page 0, section 1, paragraph 0, sentence 0)
  sentence 2: null (page null, section null, paragraph null, sentence null)
  text 1: Anonymous Author(s)
  text 2: null
  edits: { "0": { "type": "Deletion", "sentence-1-token-indices": [ 0, 19 ], "sentence-2-token-indices": null, "intention": "Content" }, "1"–"10": null }

#2  Y6LzLWXlS8E.WNev_iSes.00002
  sentence 1: Paragraph Y6LzLWXlS8E_01_01_00 (page 0, section 1, paragraph 1, sentence 0)
  sentence 2: null (page null, section null, paragraph null, sentence null)
  text 1: AffiliationAddress email
  text 2: null
  edits: { "0": { "type": "Deletion", "sentence-1-token-indices": [ 0, 24 ], "sentence-2-token-indices": null, "intention": "Content" }, "1"–"10": null }

#3  Y6LzLWXlS8E.WNev_iSes.00003
  sentence 1: Abstract Y6LzLWXlS8E_01_02_00 (page 0, section 1, paragraph 2, sentence 0)
  sentence 2: Abstract WNev_iSes_01_00_00 (page 0, section 1, paragraph 0, sentence 0)
  text 1: Abstract
  text 2: Abstract
  edits: { "0"–"10": null }

#4  Y6LzLWXlS8E.WNev_iSes.00004
  sentence 1: Abstract Y6LzLWXlS8E_01_03_00 (page 0, section 1, paragraph 3, sentence 0)
  sentence 2: Abstract WNev_iSes_01_01_00 (page 0, section 1, paragraph 1, sentence 0)
  text 1: Standard imitation learning can fail when the expert demonstrators have differentsensory inputs than the imitating agent.
  text 2: Standard imitation learning can fail when the expert demonstrators have different sensory inputs than the imitating agent.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 72, 88 ], "sentence-2-token-indices": [ 72, 89 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#5  Y6LzLWXlS8E.WNev_iSes.00005
  sentence 1: Abstract Y6LzLWXlS8E_01_03_01 (page 0, section 1, paragraph 3, sentence 1)
  sentence 2: Abstract WNev_iSes_01_01_01 (page 0, section 1, paragraph 1, sentence 1)
  text 1: This partial observability gives rise tohidden confounders in the causal graph, which lead to the failure to imitate.
  text 2: This is because partial observability gives rise to hidden confounders in the causal graph.
  edits: { "0": { "type": "Insertion", "sentence-1-token-indices": null, "sentence-2-token-indices": [ 5, 15 ], "intention": "Content" }, "1": { "type": "Substitute", "sentence-1-token-indices": [ 39, 47 ], "sentence-2-token-indices": [ 49, 58 ], "intention": "Improve-grammar-Typo" }, "2": { "type": "Substitute", "sentence-1-token-indices": [ 74, 118 ], "sentence-2-token-indices": [ 85, 91 ], "intention": "Content" }, "3"–"10": null }

#6  Y6LzLWXlS8E.WNev_iSes.00006
  sentence 1: Abstract Y6LzLWXlS8E_01_03_02 (page 0, section 1, paragraph 3, sentence 2)
  sentence 2: Abstract WNev_iSes_01_01_02 (page 0, section 1, paragraph 1, sentence 2)
  text 1: Webreak down the space of confounded imitation learning problems and identify threesettings with different data requirements in which the correct imitation policy canbe identified.
  text 2: We break down the space of confounded imitation learning problems and identify three settings with different data requirements in which the correct imitation policy can be identified.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 0, 7 ], "sentence-2-token-indices": [ 0, 8 ], "intention": "Improve-grammar-Typo" }, "1": { "type": "Substitute", "sentence-1-token-indices": [ 78, 91 ], "sentence-2-token-indices": [ 79, 93 ], "intention": "Improve-grammar-Typo" }, "2": { "type": "Substitute", "sentence-1-token-indices": [ 163, 168 ], "sentence-2-token-indices": [ 165, 171 ], "intention": "Improve-grammar-Typo" }, "3"–"10": null }

#7  Y6LzLWXlS8E.WNev_iSes.00007
  sentence 1: Abstract Y6LzLWXlS8E_01_03_03 (page 0, section 1, paragraph 3, sentence 3)
  sentence 2: Abstract WNev_iSes_01_01_03 (page 0, section 1, paragraph 1, sentence 3)
  text 1: We then introduce an algorithm for deconfounded imitation learning,which trains an inference model jointly with a latent-conditional policy.
  text 2: We then introduce an algorithm for deconfounded imitation learning, which trains an inference model jointly with a latent-conditional policy.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 58, 72 ], "sentence-2-token-indices": [ 58, 73 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#8  Y6LzLWXlS8E.WNev_iSes.00008
  sentence 1: Abstract Y6LzLWXlS8E_01_03_04 (page 0, section 1, paragraph 3, sentence 4)
  sentence 2: Abstract WNev_iSes_01_01_04 (page 0, section 1, paragraph 1, sentence 4)
  text 1: At testtime, the agent alternates between updating its belief over the latent and actingunder the belief.
  text 2: At test time, the agent alternates between updating its belief over the latent and acting under the belief.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 3, 12 ], "sentence-2-token-indices": [ 3, 13 ], "intention": "Improve-grammar-Typo" }, "1": { "type": "Substitute", "sentence-1-token-indices": [ 82, 93 ], "sentence-2-token-indices": [ 83, 95 ], "intention": "Improve-grammar-Typo" }, "2"–"10": null }

#9  Y6LzLWXlS8E.WNev_iSes.00009
  sentence 1: Abstract Y6LzLWXlS8E_01_03_05 (page 0, section 1, paragraph 3, sentence 5)
  sentence 2: Abstract WNev_iSes_01_01_05 (page 0, section 1, paragraph 1, sentence 5)
  text 1: We show in theory and practice that this algorithm convergesto the correct interventional policy, solves the confounding issue, and can undercertain assumptions achieve an asymptotically optimal imitation performance
  text 2: We show in theory and practice that this algorithm converges to the correct interventional policy, solves the confounding issue, and can under certain assumptions achieve an asymptotically optimal imitation performance.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 51, 62 ], "sentence-2-token-indices": [ 51, 63 ], "intention": "Improve-grammar-Typo" }, "1": { "type": "Substitute", "sentence-1-token-indices": [ 136, 148 ], "sentence-2-token-indices": [ 137, 150 ], "intention": "Improve-grammar-Typo" }, "2": { "type": "Substitute", "sentence-1-token-indices": [ 205, 216 ], "sentence-2-token-indices": [ 207, 219 ], "intention": "Improve-grammar-Typo" }, "3"–"10": null }
#10  Y6LzLWXlS8E.WNev_iSes.00010
  sentence 1: Section Y6LzLWXlS8E_02_00_00 (page 0, section 2, paragraph null, sentence null)
  sentence 2: Section WNev_iSes_02_00_00 (page 0, section 2, paragraph null, sentence null)
  text 1: Introduction
  text 2: Introduction
  edits: { "0"–"10": null }

#11  Y6LzLWXlS8E.WNev_iSes.00011
  sentence 1: Paragraph Y6LzLWXlS8E_02_00_00 (page 0, section 2, paragraph 0, sentence 0)
  sentence 2: Paragraph WNev_iSes_02_00_00 (page 0, section 2, paragraph 0, sentence 0)
  text 1: In imitation learning (IL), an agent learns a policy directly from expert demonstrations withoutrequiring the specification of a reward function.
  text 2: In imitation learning (IL), an agent learns a policy directly from expert demonstrations without requiring the specification of a reward function.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 89, 105 ], "sentence-2-token-indices": [ 89, 106 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#12  Y6LzLWXlS8E.WNev_iSes.00012
  sentence 1: Paragraph Y6LzLWXlS8E_02_00_01 (page 0, section 2, paragraph 0, sentence 1)
  sentence 2: Paragraph WNev_iSes_02_00_01 (page 0, section 2, paragraph 0, sentence 1)
  text 1: This paradigm could be essential for solving realworld problems in autonomous driving and robotics where reward functions can be difficult to shapeand online learning may be dangerous.
  text 2: This paradigm could be essential for solving realworld problems in autonomous driving and robotics where reward functions can be difficult to shape and online learning may be dangerous.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 142, 150 ], "sentence-2-token-indices": [ 142, 151 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#13  Y6LzLWXlS8E.WNev_iSes.00013
  sentence 1: Paragraph Y6LzLWXlS8E_02_00_02 (page 0, section 2, paragraph 0, sentence 2)
  sentence 2: Paragraph WNev_iSes_02_00_02 (page 0, section 2, paragraph 0, sentence 2)
  text 1: However, standard IL requires that the conditions under whichthe agent operates exactly match those encountered by the expert.
  text 2: However, standard IL requires that the conditions under which the agent operates exactly match those encountered by the expert.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 56, 64 ], "sentence-2-token-indices": [ 56, 65 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#14  Y6LzLWXlS8E.WNev_iSes.00014
  sentence 1: Paragraph Y6LzLWXlS8E_02_00_03 (page 0, section 2, paragraph 0, sentence 3)
  sentence 2: Paragraph WNev_iSes_02_00_03 (page 0, section 2, paragraph 0, sentence 3)
  text 1: In particular, they assume thatthere are no latent confounders —variables that affect the expert behavior, but that are not observed bythe agent.
  text 2: In particular, they assume that there are no latent confounders —variables that affect the expert behavior, but that are not observed by the agent.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 27, 36 ], "sentence-2-token-indices": [ 27, 37 ], "intention": "Improve-grammar-Typo" }, "1": { "type": "Substitute", "sentence-1-token-indices": [ 133, 138 ], "sentence-2-token-indices": [ 134, 140 ], "intention": "Improve-grammar-Typo" }, "2"–"10": null }

#15  Y6LzLWXlS8E.WNev_iSes.00015
  sentence 1: Paragraph Y6LzLWXlS8E_02_00_04 (page 0, section 2, paragraph 0, sentence 4)
  sentence 2: Paragraph WNev_iSes_02_00_04 (page 0, section 2, paragraph 0, sentence 4)
  text 1: This assumption is often unrealistic.
  text 2: This assumption is often unrealistic.
  edits: { "0"–"10": null }

#16  Y6LzLWXlS8E.WNev_iSes.00016
  sentence 1: Paragraph Y6LzLWXlS8E_02_00_05 (page 0, section 2, paragraph 0, sentence 5)
  sentence 2: Paragraph WNev_iSes_02_00_05 (page 0, section 2, paragraph 0, sentence 5)
  text 1: Consider a human driver who is aware of the weatherforecast and lowers its speed in icy conditions, even if those are not visible from observations.
  text 2: Consider a human driver who is aware of the weather forecast and lowers its speed in icy conditions, even if those are not visible from observations.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 44, 59 ], "sentence-2-token-indices": [ 44, 60 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#17  Y6LzLWXlS8E.WNev_iSes.00017
  sentence 1: Paragraph Y6LzLWXlS8E_02_00_06 (page 0, section 2, paragraph 0, sentence 6)
  sentence 2: Paragraph WNev_iSes_02_00_06 (page 0, section 2, paragraph 0, sentence 6)
  text 1: Animitator agent without access to the weather forecast will not be able to adapt to such conditions.
  text 2: An imitator agent without access to the weather forecast will not be able to adapt to such conditions.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 0, 10 ], "sentence-2-token-indices": [ 0, 11 ], "intention": "Language" }, "1"–"10": null }

#18  Y6LzLWXlS8E.WNev_iSes.00018
  sentence 1: Paragraph Y6LzLWXlS8E_02_01_00 (page 0, section 2, paragraph 1, sentence 0)
  sentence 2: Paragraph WNev_iSes_02_01_00 (page 0, section 2, paragraph 1, sentence 0)
  text 1: In such a situation, an imitating agent may take their own past actions as evidence for the values ofthe confounder.
  text 2: In such a situation, an imitating agent may take their own past actions as evidence for the values of the confounder.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 99, 104 ], "sentence-2-token-indices": [ 99, 105 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#19  Y6LzLWXlS8E.WNev_iSes.00019
  sentence 1: Paragraph Y6LzLWXlS8E_02_01_01 (page 0, section 2, paragraph 1, sentence 1)
  sentence 2: Paragraph WNev_iSes_02_01_01 (page 0, section 2, paragraph 1, sentence 1)
  text 1: A self-driving car, for instance, could conclude that it is driving fast, thus there can beno ice on the road.
  text 2: A self-driving car, for instance, could conclude that it is driving fast, thus there can be no ice on the road.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 89, 93 ], "sentence-2-token-indices": [ 89, 94 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }
#20  Y6LzLWXlS8E.WNev_iSes.00020
  sentence 1: Paragraph Y6LzLWXlS8E_02_01_02 (page 0, section 2, paragraph 1, sentence 2)
  sentence 2: Paragraph WNev_iSes_02_01_02 (page 0, section 2, paragraph 1, sentence 2)
  text 1: This issue of causal delusion was first pointed out in Ortega and Braun [2010a,b]and studied in more depth by Ortega et al.
  text 2: This issue of causal delusion was first pointed out in Ortega and Braun [2010a,b] and studied in more depth by Ortega et al.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 72, 84 ], "sentence-2-token-indices": [ 72, 85 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#21  Y6LzLWXlS8E.WNev_iSes.00021
  sentence 1: Paragraph Y6LzLWXlS8E_02_01_03 (page 0, section 2, paragraph 1, sentence 3)
  sentence 2: Paragraph WNev_iSes_02_01_03 (page 0, section 2, paragraph 1, sentence 3)
  text 1: The authors analyze the causal structure of thisproblem and argue that an imitator needs to learn a policy that corresponds to a certain interventionaldistribution.
  text 2: The authors analyze the causal structure of this problem and argue that an imitator needs to learn a policy that corresponds to a certain interventional distribution.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 44, 55 ], "sentence-2-token-indices": [ 44, 56 ], "intention": "Improve-grammar-Typo" }, "1": { "type": "Substitute", "sentence-1-token-indices": [ 137, 164 ], "sentence-2-token-indices": [ 138, 166 ], "intention": "Improve-grammar-Typo" }, "2"–"10": null }

#22  Y6LzLWXlS8E.WNev_iSes.00022
  sentence 1: Paragraph Y6LzLWXlS8E_02_01_04 (page 0, section 2, paragraph 1, sentence 4)
  sentence 2: Paragraph WNev_iSes_02_01_04 (page 0, section 2, paragraph 1, sentence 4)
  text 1: They then show that the classic DAgger algorithm [Ross et al., 2011], which requiresquerying experts at each time step, solves this problem and converges to the interventional policy.
  text 2: They then show that the classic DAgger algorithm [Ross et al., 2011], which requires querying experts at each time step, solves this problem and converges to the interventional policy.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 76, 92 ], "sentence-2-token-indices": [ 76, 93 ], "intention": "Language" }, "1"–"10": null }

#23  Y6LzLWXlS8E.WNev_iSes.00023
  sentence 1: Paragraph Y6LzLWXlS8E_02_02_00 (page 0, section 2, paragraph 2, sentence 0)
  sentence 2: Paragraph WNev_iSes_02_02_00 (page 0, section 2, paragraph 2, sentence 0)
  text 1: In this paper, we present a solution to a confounded IL problem, where both the expert policy andthe environment dynamics are Markovian.
  text 2: In this paper, we present a solution to a confounded IL problem, where both the expert policy and the environment dynamics are Markovian.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 94, 100 ], "sentence-2-token-indices": [ 94, 101 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#24  Y6LzLWXlS8E.WNev_iSes.00024
  sentence 1: Paragraph Y6LzLWXlS8E_02_02_01 (page 0, section 2, paragraph 2, sentence 1)
  sentence 2: Paragraph WNev_iSes_02_02_01 (page 0, section 2, paragraph 2, sentence 1)
  text 1: The solution does not require querying experts.
  text 2: The solution does not require querying experts.
  edits: { "0"–"10": null }

#25  Y6LzLWXlS8E.WNev_iSes.00025
  sentence 1: Paragraph Y6LzLWXlS8E_02_02_02 (page 0, section 2, paragraph 2, sentence 2)
  sentence 2: Paragraph WNev_iSes_02_02_02 (page 0, section 2, paragraph 2, sentence 2)
  text 1: We firstpresent a characterization of confounded IL problems depending on properties of the environmentand expert policy (section 3).
  text 2: We first present a characterization of confounded IL problems depending on properties of the environment and expert policy (Section 3).
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 3, 15 ], "sentence-2-token-indices": [ 3, 16 ], "intention": "Improve-grammar-Typo" }, "1": { "type": "Substitute", "sentence-1-token-indices": [ 92, 106 ], "sentence-2-token-indices": [ 93, 108 ], "intention": "Improve-grammar-Typo" }, "2": { "type": "Substitute", "sentence-1-token-indices": [ 121, 129 ], "sentence-2-token-indices": [ 123, 131 ], "intention": "Improve-grammar-Typo" }, "3"–"10": null }

#26  Y6LzLWXlS8E.WNev_iSes.00026
  sentence 1: Paragraph Y6LzLWXlS8E_02_02_03 (page 0, section 2, paragraph 2, sentence 3)
  sentence 2: Paragraph WNev_iSes_02_02_03 (page 0, section 2, paragraph 2, sentence 3)
  text 1: We then show theoretically that an imitating agent can learn behaviorsthat approach optimality when the above Markov assumptions and a recurrence property hold.
  text 2: We then show theoretically that an imitating agent can learn behaviors that approach optimality when the above Markov assumptions and a recurrence property hold.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 61, 74 ], "sentence-2-token-indices": [ 61, 75 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#27  Y6LzLWXlS8E.WNev_iSes.00027
  sentence 1: Paragraph Y6LzLWXlS8E_02_03_00 (page 0, section 2, paragraph 3, sentence 0)
  sentence 2: Paragraph WNev_iSes_02_03_00 (page 1, section 2, paragraph 3, sentence 0)
  text 1: We then introduce a practical algorithm for deconfounded imitation learning that does not require expert queries (section 4).
  text 2: We then introduce a practical algorithm for deconfounded imitation learning that does not require expert queries (Section 4).
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 113, 121 ], "sentence-2-token-indices": [ 113, 121 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#28  Y6LzLWXlS8E.WNev_iSes.00028
  sentence 1: Paragraph Y6LzLWXlS8E_02_03_01 (page 0, section 2, paragraph 3, sentence 1)
  sentence 2: Paragraph WNev_iSes_02_03_01 (page 1, section 2, paragraph 3, sentence 1)
  text 1: An agent jointly learns an inference network for the valueof latent variables that explain the environment dynamics as well as a latent-conditional policy.
  text 2: An agent jointly learns an inference network for the value of latent variables that explain the environment dynamics as well as a latent-conditional policy.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 53, 60 ], "sentence-2-token-indices": [ 53, 61 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#29  Y6LzLWXlS8E.WNev_iSes.00029
  sentence 1: Paragraph Y6LzLWXlS8E_02_04_00 (page 0, section 2, paragraph 4, sentence 0)
  sentence 2: Paragraph WNev_iSes_02_03_02 (page 1, section 2, paragraph 3, sentence 2)
  text 1: At test time, the agent iteratively samples latents from its belief, acts in the environ- ment, and updates the belief based on the environment dynamics.
  text 2: At test time, the agent iteratively samples latents from its belief, acts in the environment, and updates the belief based on the environment dynamics.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 81, 95 ], "sentence-2-token-indices": [ 81, 93 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }
#30  Y6LzLWXlS8E.WNev_iSes.00030
  sentence 1: Paragraph Y6LzLWXlS8E_02_04_01 (page 0, section 2, paragraph 4, sentence 1)
  sentence 2: Paragraph WNev_iSes_02_03_03 (page 1, section 2, paragraph 3, sentence 3)
  text 1: An imitator steering a selfdriving car, for instance, would learn how to infer the weather condition from the dynamics ofthe car on the road.
  text 2: An imitator steering a self-driving car, for instance, would learn how to infer the weather condition from the dynamics of the car on the road.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 23, 34 ], "sentence-2-token-indices": [ 23, 35 ], "intention": "Improve-grammar-Typo" }, "1": { "type": "Substitute", "sentence-1-token-indices": [ 119, 124 ], "sentence-2-token-indices": [ 120, 126 ], "intention": "Improve-grammar-Typo" }, "2"–"10": null }

#31  Y6LzLWXlS8E.WNev_iSes.00031
  sentence 1: Paragraph Y6LzLWXlS8E_02_04_02 (page 0, section 2, paragraph 4, sentence 2)
  sentence 2: Paragraph WNev_iSes_02_03_04 (page 1, section 2, paragraph 3, sentence 4)
  text 1: This inference model can be applied both to its own online experienceas well as to expert trajectories, allowing it to imitate the behavior adequate for the weather.
  text 2: This inference model can be applied both to its own online experience as well as to expert trajectories, allowing it to imitate the behavior adequate for the weather.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 59, 71 ], "sentence-2-token-indices": [ 59, 72 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#32  Y6LzLWXlS8E.WNev_iSes.00032
  sentence 1: Figure Y6LzLWXlS8E_02_05_00 (page 1, section 2, paragraph 5, sentence 0)
  sentence 2: Figure WNev_iSes_02_04_00 (page 1, section 2, paragraph 4, sentence 0)
  text 1: [Figure]
  text 2: [Figure]
  edits: { "0"–"10": null }

#33  Y6LzLWXlS8E.WNev_iSes.00033
  sentence 1: Paragraph Y6LzLWXlS8E_02_06_00 (page 1, section 2, paragraph 6, sentence 0)
  sentence 2: Paragraph WNev_iSes_02_05_00 (page 1, section 2, paragraph 5, sentence 0)
  text 1: Finally, our deconfounded imitation learning algorithm isdemonstrated in a multi-armed bandit problem.
  text 2: Finally, our deconfounded imitation learning algorithm is demonstrated in a multi-armed bandit problem.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 55, 69 ], "sentence-2-token-indices": [ 55, 70 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#34  Y6LzLWXlS8E.WNev_iSes.00034
  sentence 1: Paragraph Y6LzLWXlS8E_02_06_01 (page 1, section 2, paragraph 6, sentence 1)
  sentence 2: Paragraph WNev_iSes_02_05_01 (page 1, section 2, paragraph 5, sentence 1)
  text 1: We showthat the agent quickly adapts to the unobserved propertiesof the environment and then behaves optimally (section 5).
  text 2: We show that the agent quickly adapts to the unobserved properties of the environment and then behaves optimally (Section 5).
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 3, 11 ], "sentence-2-token-indices": [ 3, 12 ], "intention": "Improve-grammar-Typo" }, "1": { "type": "Substitute", "sentence-1-token-indices": [ 55, 67 ], "sentence-2-token-indices": [ 56, 69 ], "intention": "Improve-grammar-Typo" }, "2": { "type": "Substitute", "sentence-1-token-indices": [ 111, 119 ], "sentence-2-token-indices": [ 113, 121 ], "intention": "Improve-grammar-Typo" }, "3"–"10": null }

#35  Y6LzLWXlS8E.WNev_iSes.00035
  sentence 1: Section Y6LzLWXlS8E_03_00_00 (page 1, section 3, paragraph null, sentence null)
  sentence 2: Section WNev_iSes_03_00_00 (page 1, section 3, paragraph null, sentence null)
  text 1: Imitation learning and latent confounders
  text 2: Imitation learning and latent confounders
  edits: { "0"–"10": null }

#36  Y6LzLWXlS8E.WNev_iSes.00036
  sentence 1: Paragraph Y6LzLWXlS8E_03_00_00 (page 1, section 3, paragraph 0, sentence 0)
  sentence 2: Paragraph WNev_iSes_03_00_00 (page 1, section 3, paragraph 0, sentence 0)
  text 1: We begin by introducing the problem of confounded imitation learning.
  text 2: We begin by introducing the problem of confounded imitation learning.
  edits: { "0"–"10": null }

#37  Y6LzLWXlS8E.WNev_iSes.00037
  sentence 1: Paragraph Y6LzLWXlS8E_03_00_01 (page 1, section 3, paragraph 0, sentence 1)
  sentence 2: Paragraph WNev_iSes_03_00_01 (page 1, section 3, paragraph 0, sentence 1)
  text 1: Following Ortega et al.
  text 2: Following Ortega et al.
  edits: { "0"–"10": null }

#38  Y6LzLWXlS8E.WNev_iSes.00038
  sentence 1: Paragraph Y6LzLWXlS8E_03_00_02 (page 1, section 3, paragraph 0, sentence 2)
  sentence 2: Paragraph WNev_iSes_03_00_02 (page 1, section 3, paragraph 0, sentence 2)
  text 1: [2021], we discusshow behavioral cloning fails in the presence of latent confounders.
  text 2: [2021], we discuss how behavioral cloning fails in the presence of latent confounders.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 11, 21 ], "sentence-2-token-indices": [ 11, 22 ], "intention": "Language" }, "1"–"10": null }

#39  Y6LzLWXlS8E.WNev_iSes.00039
  sentence 1: Paragraph Y6LzLWXlS8E_03_00_03 (page 1, section 3, paragraph 0, sentence 3)
  sentence 2: Paragraph WNev_iSes_03_00_03 (page 1, section 3, paragraph 0, sentence 3)
  text 1: We then define the interventional policy, whichsolves the problem of confounded imitation learning.
  text 2: We then define the interventional policy, which solves the problem of confounded imitation learning.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 42, 53 ], "sentence-2-token-indices": [ 42, 54 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }
#40  Y6LzLWXlS8E.WNev_iSes.00040
  sentence 1: Section Y6LzLWXlS8E_04_00_00 (page 1, section 4, paragraph null, sentence null)
  sentence 2: Section WNev_iSes_04_00_00 (page 1, section 4, paragraph null, sentence null)
  text 1: 2.1 Imitation learning
  text 2: 2.1 Imitation learning
  edits: { "0"–"10": null }

#41  Y6LzLWXlS8E.WNev_iSes.00041
  sentence 1: Paragraph Y6LzLWXlS8E_04_00_00 (page 1, section 4, paragraph 0, sentence 0)
  sentence 2: Paragraph WNev_iSes_04_00_00 (page 1, section 4, paragraph 0, sentence 0)
  text 1: Imitation learning learns a policy from a dataset of expert demonstrations via supervised learning.
  text 2: Imitation learning learns a policy from a dataset of expert demonstrations via supervised learning.
  edits: { "0"–"10": null }

#42  Y6LzLWXlS8E.WNev_iSes.00042
  sentence 1: Paragraph Y6LzLWXlS8E_04_01_00 (page 1, section 4, paragraph 1, sentence 0)
  sentence 2: Paragraph WNev_iSes_04_00_01 (page 1, section 4, paragraph 0, sentence 1)
  text 1: The expert is a policy that acts in a (reward-free)
  text 2: The expert is a policy that acts in a (reward-free)
  edits: { "0"–"10": null }

#43  Y6LzLWXlS8E.WNev_iSes.00043
  sentence 1: Paragraph Y6LzLWXlS8E_04_01_01 (page 1, section 4, paragraph 1, sentence 1)
  sentence 2: Paragraph WNev_iSes_04_00_02 (page 1, section 4, paragraph 0, sentence 2)
  text 1: Markov decision process (MDP) defined by a tuple M = ( S , A , P ( s ′ | s, a ) , P ( s 0 )) , where S is the set of states, A is the set of actions, P ( s ′ | s, a ) isthe transition probability, and P ( s 0 ) is a distribution over initial states.
  text 2: Markov decision process (MDP) defined by a tuple M = ( S , A , P ( s ′ | s, a ) , P ( s 0 )) , where S is the set of states, A is the set of actions, P ( s ′ | s, a ) is the transition probability, and P ( s 0 ) is a distribution over initial states.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 167, 172 ], "sentence-2-token-indices": [ 167, 173 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#44  Y6LzLWXlS8E.WNev_iSes.00044
  sentence 1: Paragraph Y6LzLWXlS8E_04_02_01 (page 1, section 4, paragraph 2, sentence 1)
  sentence 2: Paragraph WNev_iSes_04_00_03 (page 1, section 4, paragraph 0, sentence 3)
  text 1: The expert’s interaction withthe environment produces a trajectory τ =
  text 2: The expert’s interaction with the environment produces a trajectory τ =
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 25, 32 ], "sentence-2-token-indices": [ 25, 33 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#45  Y6LzLWXlS8E.WNev_iSes.00045
  sentence 1: Paragraph Y6LzLWXlS8E_04_02_02 (page 1, section 4, paragraph 2, sentence 2)
  sentence 2: Paragraph WNev_iSes_04_00_04 (page 1, section 4, paragraph 0, sentence 4)
  text 1: ( s 0 , a 0 , . .
  text 2: ( s 0 , a 0 , . .
  edits: { "0"–"10": null }

#46  Y6LzLWXlS8E.WNev_iSes.00046
  sentence 1: Paragraph Y6LzLWXlS8E_04_02_03 (page 1, section 4, paragraph 2, sentence 3)
  sentence 2: Paragraph WNev_iSes_04_00_05 (page 1, section 4, paragraph 0, sentence 5)
  text 1: . , a T − 1 , s T ) .
  text 2: . , a T − 1 , s T ) .
  edits: { "0"–"10": null }

#47  Y6LzLWXlS8E.WNev_iSes.00047
  sentence 1: Paragraph Y6LzLWXlS8E_04_02_04 (page 1, section 4, paragraph 2, sentence 4)
  sentence 2: Paragraph WNev_iSes_04_00_06 (page 1, section 4, paragraph 0, sentence 6)
  text 1: The expert may maximize theexpectation over some reward function, but this is not necessary (and some tasks cannot be expressedthrough Markov rewards Abel et al.
  text 2: The expert may maximize the expectation over some reward function, but this is not necessary (and some tasks cannot be expressed through Markov rewards Abel et al.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 24, 38 ], "sentence-2-token-indices": [ 24, 39 ], "intention": "Improve-grammar-Typo" }, "1": { "type": "Substitute", "sentence-1-token-indices": [ 118, 134 ], "sentence-2-token-indices": [ 119, 136 ], "intention": "Improve-grammar-Typo" }, "2"–"10": null }

#48  Y6LzLWXlS8E.WNev_iSes.00048
  sentence 1: Paragraph Y6LzLWXlS8E_04_02_05 (page 1, section 4, paragraph 2, sentence 5)
  sentence 2: Paragraph WNev_iSes_04_01_00 (page 1, section 4, paragraph 1, sentence 0)
  text 1: In the simplest form of imitation learning, a behavioralcloning policy π η ( a | s ) parametrized by η is learned by minimizing the loss − (cid:80) s,a ∈D log π η ( a | s ) ,where D is the dataset of state-action pairs collected by the expert’s policy.
  text 2: In the simplest form of imitation learning, a behavioral cloning policy π η ( a | s ) parametrized by η is learned by minimizing the loss − (cid:80) s,a ∈D log π η ( a | s ) , where D is the dataset of state-action pairs collected by the expert’s policy.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 46, 63 ], "sentence-2-token-indices": [ 46, 64 ], "intention": "Improve-grammar-Typo" }, "1": { "type": "Substitute", "sentence-1-token-indices": [ 173, 179 ], "sentence-2-token-indices": [ 174, 181 ], "intention": "Improve-grammar-Typo" }, "2"–"10": null }

#49  Y6LzLWXlS8E.WNev_iSes.00049
  sentence 1: Section Y6LzLWXlS8E_05_00_00 (page 1, section 5, paragraph null, sentence null)
  sentence 2: Section WNev_iSes_05_00_00 (page 1, section 5, paragraph null, sentence null)
  text 1: 2.2 Confounded imitation learning
  text 2: 2.2 Confounded imitation learning
  edits: { "0"–"10": null }
#50  Y6LzLWXlS8E.WNev_iSes.00050
  sentence 1: Paragraph Y6LzLWXlS8E_05_00_00 (page 1, section 5, paragraph 0, sentence 0)
  sentence 2: Paragraph WNev_iSes_05_00_00 (page 1, section 5, paragraph 0, sentence 0)
  text 1: We now extend the imitation learning setup to allow for some variables θ ∈ Θ that are observed bythe expert, but not the imitator.
  text 2: We now extend the imitation learning setup to allow for some variables θ ∈ Θ that are observed by the expert, but not the imitator.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 95, 100 ], "sentence-2-token-indices": [ 95, 101 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#51  Y6LzLWXlS8E.WNev_iSes.00051
  sentence 1: Paragraph Y6LzLWXlS8E_05_00_01 (page 1, section 5, paragraph 0, sentence 1)
  sentence 2: Paragraph WNev_iSes_05_00_01 (page 1, section 5, paragraph 0, sentence 1)
  text 1: We define a family of Markov Decision processes as a latent space Θ ,a distribution P ( θ ) , and for each θ ∈ Θ , a reward-free MDP M θ = ( S , A , P ( s ′ | s, a, θ ) , P ( s 0 | θ )) .
  text 2: We define a family of Markov Decision processes as a latent space Θ , a distribution P ( θ ) , and for each θ ∈ Θ , a reward-free MDP M θ = ( S , A , P ( s ′ | s, a, θ ) , P ( s 0 | θ )) .
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 68, 70 ], "sentence-2-token-indices": [ 68, 71 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#52  Y6LzLWXlS8E.WNev_iSes.00052
  sentence 1: Paragraph Y6LzLWXlS8E_05_01_00 (page 1, section 5, paragraph 1, sentence 0)
  sentence 2: Paragraph WNev_iSes_05_01_00 (page 1, section 5, paragraph 1, sentence 0)
  text 1: We assume there exists an expert policy π exp ( a | s, θ ) for each MDP.
  text 2: We assume there exists an expert policy π exp ( a | s, θ ) for each MDP.
  edits: { "0"–"10": null }

#53  Y6LzLWXlS8E.WNev_iSes.00053
  sentence 1: Paragraph Y6LzLWXlS8E_05_01_01 (page 1, section 5, paragraph 1, sentence 1)
  sentence 2: Paragraph WNev_iSes_05_01_01 (page 1, section 5, paragraph 1, sentence 1)
  text 1: When it interacts with theenvironment, it generates the following distribution over trajectories τ :
  text 2: When it interacts with the environment, it generates the following distribution over trajectories τ :
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 23, 38 ], "sentence-2-token-indices": [ 23, 39 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#54  Y6LzLWXlS8E.WNev_iSes.00054
  sentence 1: Equation Y6LzLWXlS8E_05_02_00 (page 1, section 5, paragraph 2, sentence 0)
  sentence 2: Equation WNev_iSes_05_02_00 (page 1, section 5, paragraph 2, sentence 0)
  text 1: [Equation]
  text 2: [Equation]
  edits: { "0"–"10": null }

#55  Y6LzLWXlS8E.WNev_iSes.00055
  sentence 1: Paragraph Y6LzLWXlS8E_05_03_00 (page 1, section 5, paragraph 3, sentence 0)
  sentence 2: Paragraph WNev_iSes_05_03_00 (page 1, section 5, paragraph 3, sentence 0)
  text 1: The imitator does not observe the latent θ .
  text 2: The imitator does not observe the latent θ .
  edits: { "0"–"10": null }

#56  Y6LzLWXlS8E.WNev_iSes.00056
  sentence 1: Paragraph Y6LzLWXlS8E_05_03_01 (page 1, section 5, paragraph 3, sentence 1)
  sentence 2: Paragraph WNev_iSes_05_03_01 (page 1, section 5, paragraph 3, sentence 1)
  text 1: It may thus need to implicitly infer it from the pasttransitions, so we take it to be a non-Markovian policy π η ( a t | s 1 , a 1 , . . .
  text 2: It may thus need to implicitly infer it from the past transitions, so we take it to be a non-Markovian policy π η ( a t | s 1 , a 1 , . . .
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 49, 65 ], "sentence-2-token-indices": [ 49, 66 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#57  Y6LzLWXlS8E.WNev_iSes.00057
  sentence 1: Paragraph Y6LzLWXlS8E_05_03_02 (page 1, section 5, paragraph 3, sentence 2)
  sentence 2: Paragraph WNev_iSes_05_03_02 (page 1, section 5, paragraph 3, sentence 2)
  text 1: , s t ) , parameterized by η .
  text 2: , s t ) , parameterized by η .
  edits: { "0"–"10": null }

#58  Y6LzLWXlS8E.WNev_iSes.00058
  sentence 1: Paragraph Y6LzLWXlS8E_05_04_00 (page 1, section 5, paragraph 4, sentence 0)
  sentence 2: Paragraph WNev_iSes_05_04_00 (page 1, section 5, paragraph 4, sentence 0)
  text 1: The imitator generates the following distribution over trajectories:
  text 2: The imitator generates the following distribution over trajectories:
  edits: { "0"–"10": null }

#59  Y6LzLWXlS8E.WNev_iSes.00059
  sentence 1: Equation Y6LzLWXlS8E_05_05_00 (page 1, section 5, paragraph 5, sentence 0)
  sentence 2: Equation WNev_iSes_05_05_00 (page 1, section 5, paragraph 5, sentence 0)
  text 1: [Equation]
  text 2: [Equation]
  edits: { "0"–"10": null }
#60  Y6LzLWXlS8E.WNev_iSes.00060
  sentence 1: Paragraph Y6LzLWXlS8E_05_06_00 (page 1, section 5, paragraph 6, sentence 0)
  sentence 2: Paragraph WNev_iSes_05_06_00 (page 1, section 5, paragraph 6, sentence 0)
  text 1: The Bayesian networks associated to these distributions are shown in figure 1.
  text 2: The Bayesian networks associated to these distributions are shown in Figure 1.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 69, 75 ], "sentence-2-token-indices": [ 69, 75 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#61  Y6LzLWXlS8E.WNev_iSes.00061
  sentence 1: Paragraph Y6LzLWXlS8E_05_07_00 (page 1, section 5, paragraph 7, sentence 0)
  sentence 2: Paragraph WNev_iSes_05_07_00 (page 1, section 5, paragraph 7, sentence 0)
  text 1: The goal of imitation learning in this setting is to learn imitator parameters η such that when theimitator is executed in the environment, the imitator agrees with the expert’s decisions, meaning wewish to maximise
  text 2: The goal of imitation learning in this setting is to learn imitator parameters η such that when the imitator is executed in the environment, the imitator agrees with the expert’s decisions, meaning we wish to maximise
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 96, 107 ], "sentence-2-token-indices": [ 96, 108 ], "intention": "Improve-grammar-Typo" }, "1": { "type": "Substitute", "sentence-1-token-indices": [ 197, 203 ], "sentence-2-token-indices": [ 198, 205 ], "intention": "Improve-grammar-Typo" }, "2"–"10": null }

#62  Y6LzLWXlS8E.WNev_iSes.00062
  sentence 1: Equation Y6LzLWXlS8E_05_08_00 (page 1, section 5, paragraph 8, sentence 0)
  sentence 2: Equation WNev_iSes_05_08_00 (page 2, section 5, paragraph 8, sentence 0)
  text 1: [Equation]
  text 2: [Equation]
  edits: { "0"–"10": null }

#63  Y6LzLWXlS8E.WNev_iSes.00063
  sentence 1: Paragraph Y6LzLWXlS8E_05_09_00 (page 1, section 5, paragraph 9, sentence 0)
  sentence 2: Paragraph WNev_iSes_05_09_00 (page 2, section 5, paragraph 9, sentence 0)
  text 1: If the expert solves some task (e. g. maximizes some reward function), this amounts to solving thesame task.
  text 2: If the expert solves some task (e. g. maximizes some reward function), this amounts to solving the same task.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 95, 102 ], "sentence-2-token-indices": [ 95, 103 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#64  Y6LzLWXlS8E.WNev_iSes.00064
  sentence 1: Section Y6LzLWXlS8E_06_00_00 (page 2, section 6, paragraph null, sentence null)
  sentence 2: Section WNev_iSes_06_00_00 (page 2, section 6, paragraph null, sentence null)
  text 1: 2.3 Naive behavioral cloning
  text 2: 2.3 Naive behavioral cloning
  edits: { "0"–"10": null }

#65  Y6LzLWXlS8E.WNev_iSes.00065
  sentence 1: Paragraph Y6LzLWXlS8E_06_00_00 (page 2, section 6, paragraph 0, sentence 0)
  sentence 2: Paragraph WNev_iSes_06_00_00 (page 2, section 6, paragraph 0, sentence 0)
  text 1: If we have access to a data set of expert demonstrations, one can learn an imitator via behavioralcloning on the expert’s demonstrations.
  text 2: If we have access to a data set of expert demonstrations, one can learn an imitator via behavioral cloning on the expert’s demonstrations.
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 88, 105 ], "sentence-2-token-indices": [ 88, 106 ], "intention": "Improve-grammar-Typo" }, "1"–"10": null }

#66  Y6LzLWXlS8E.WNev_iSes.00066
  sentence 1: Paragraph Y6LzLWXlS8E_06_00_01 (page 2, section 6, paragraph 0, sentence 1)
  sentence 2: Paragraph WNev_iSes_06_00_01 (page 2, section 6, paragraph 0, sentence 1)
  text 1: At optimality, this learns the conditional policy :
  text 2: At optimality, this learns the conditional policy :
  edits: { "0"–"10": null }

#67  Y6LzLWXlS8E.WNev_iSes.00067
  sentence 1: Equation Y6LzLWXlS8E_06_01_00 (page 2, section 6, paragraph 1, sentence 0)
  sentence 2: Equation WNev_iSes_06_01_00 (page 2, section 6, paragraph 1, sentence 0)
  text 1: [Equation]
  text 2: [Equation]
  edits: { "0"–"10": null }

#68  Y6LzLWXlS8E.WNev_iSes.00068
  sentence 1: Paragraph Y6LzLWXlS8E_06_02_00 (page 2, section 6, paragraph 2, sentence 0)
  sentence 2: Paragraph WNev_iSes_06_02_00 (page 2, section 6, paragraph 2, sentence 0)
  text 1: Following Ortega et al.
  text 2: Following Ortega et al.
  edits: { "0"–"10": null }

#69  Y6LzLWXlS8E.WNev_iSes.00069
  sentence 1: Paragraph Y6LzLWXlS8E_06_02_01 (page 2, section 6, paragraph 2, sentence 1)
  sentence 2: Paragraph WNev_iSes_06_02_01 (page 2, section 6, paragraph 2, sentence 1)
  text 1: [2021], consider the following example of a confounded multi-armed banditwith
  text 2: [2021], consider the following example of a confounded multi-armed bandit with A =
  edits: { "0": { "type": "Substitute", "sentence-1-token-indices": [ 67, 77 ], "sentence-2-token-indices": [ 67, 82 ], "intention": "Content" }, "1"–"10": null }
70
2
Paragraph
Y6LzLWXlS8E_06_02_02
6
2
Paragraph
WNev_iSes_06_02_02
6
A = Θ = { 1 , . .
Θ = { 1 , . .
{ "0": { "type": "Deletion", "sentence-1-token-indices": [ 0, 3 ], "sentence-2-token-indices": null, "intention": "Language" }, "1": null, "2": null, "3": null, "4": null, "5": null, "6": null, "7": null, "8": null, "9": null, "10": null }
Y6LzLWXlS8E
WNev_iSes
Y6LzLWXlS8E.WNev_iSes.00070
2
2
2
2
71
2
Paragraph
Y6LzLWXlS8E_06_02_03
6
2
Paragraph
WNev_iSes_06_02_03
6
, 5 } and S = { 0 , 1 } :
, 5 } and S = { 0 , 1 } :
{ "0": null, "1": null, "2": null, "3": null, "4": null, "5": null, "6": null, "7": null, "8": null, "9": null, "10": null }
Y6LzLWXlS8E
WNev_iSes
Y6LzLWXlS8E.WNev_iSes.00071
2
3
2
3
72
2
Equation
Y6LzLWXlS8E_06_03_00
6
2
Equation
WNev_iSes_06_03_00
6
[Equation]
[Equation] [Equation]
{ "0": { "type": "Insertion", "sentence-1-token-indices": null, "sentence-2-token-indices": [ 11, 21 ], "intention": "Format" }, "1": null, "2": null, "3": null, "4": null, "5": null, "6": null, "7": null, "8": null, "9": null, "10": null }
Y6LzLWXlS8E
WNev_iSes
Y6LzLWXlS8E.WNev_iSes.00072
3
0
3
0
73
2
Paragraph
Y6LzLWXlS8E_06_04_00
6
2
Paragraph
WNev_iSes_06_05_00
6
The expert knows which bandit arm is special (and labeled by θ ) and pulls it with high probability,while the imitating agent does not have access to this information.
The expert knows which bandit arm is special (and labeled by θ ) and pulls it with high probability, while the imitating agent does not have access to this information.
{ "0": { "type": "Substitute", "sentence-1-token-indices": [ 88, 105 ], "sentence-2-token-indices": [ 88, 106 ], "intention": "Improve-grammar-Typo" }, "1": null, "2": null, "3": null, "4": null, "5": null, "6": null, "7": null, "8": null, "9": null, "10": null }
Y6LzLWXlS8E
WNev_iSes
Y6LzLWXlS8E.WNev_iSes.00073
4
0
5
0
74
2
Paragraph
Y6LzLWXlS8E_06_05_00
6
2
Paragraph
WNev_iSes_06_06_00
6
If we roll out the naive behavioral cloning policy in this environment, shown in Figure 2, we see thecausal delusion at work.
If we roll out the naive behavioral cloning policy in this environment, shown in Figure 2, we see the causal delusion at work.
{ "0": { "type": "Substitute", "sentence-1-token-indices": [ 98, 107 ], "sentence-2-token-indices": [ 98, 108 ], "intention": "Improve-grammar-Typo" }, "1": null, "2": null, "3": null, "4": null, "5": null, "6": null, "7": null, "8": null, "9": null, "10": null }
Y6LzLWXlS8E
WNev_iSes
Y6LzLWXlS8E.WNev_iSes.00074
5
0
6
0
75
2
Paragraph
Y6LzLWXlS8E_06_05_01
6
2
Paragraph
WNev_iSes_06_06_01
6
At time t , the latent that is inferred by p cond takes past actions as evidencefor the latent variable.
At time t , the latent that is inferred by p cond takes past actions as evidence for the latent variable.
{ "0": { "type": "Substitute", "sentence-1-token-indices": [ 72, 83 ], "sentence-2-token-indices": [ 72, 84 ], "intention": "Improve-grammar-Typo" }, "1": null, "2": null, "3": null, "4": null, "5": null, "6": null, "7": null, "8": null, "9": null, "10": null }
Y6LzLWXlS8E
WNev_iSes
Y6LzLWXlS8E.WNev_iSes.00075
5
1
6
1
Row 76 (pair Y6LzLWXlS8E.WNev_iSes.00076)
  sentence 1 (2, Paragraph, Y6LzLWXlS8E_06_05_02, section 6, paragraph 5, sentence 2): This makes sense on the expert demonstrations, as the expert is cognizantof the latent variable.
  sentence 2 (2, Paragraph, WNev_iSes_06_06_02, section 6, paragraph 6, sentence 2): This makes sense on the expert demonstrations, as the expert is cognizant of the latent variable.
  edits: Substitute, sentence-1 tokens [64, 75] → sentence-2 tokens [64, 76] (Improve-grammar-Typo)
Row 77 (pair Y6LzLWXlS8E.WNev_iSes.00077)
  sentence 1 (2, Paragraph, Y6LzLWXlS8E_06_05_03, section 6, paragraph 5, sentence 3): However, during an imitator roll-out, the past actions are not evidence of thelatent, as the imitator is blind to it.
  sentence 2 (2, Paragraph, WNev_iSes_06_06_03, section 6, paragraph 6, sentence 3): However, during an imitator roll-out, the past actions are not evidence of the latent, as the imitator is blind to it.
  edits: Substitute, sentence-1 tokens [75, 85] → sentence-2 tokens [75, 86] (Improve-grammar-Typo)
Row 78 (pair Y6LzLWXlS8E.WNev_iSes.00078)
  sentence 1 (2, Paragraph, Y6LzLWXlS8E_06_05_04, section 6, paragraph 5, sentence 4): Concretely, the imitator will take its first action uniformly andlater tends to repeat that action, as it mistakenly takes the first action to be evidence for the latent.
  sentence 2 (2, Paragraph, WNev_iSes_06_06_04, section 6, paragraph 6, sentence 4): Concretely, the imitator will take its first action uniformly and later tends to repeat that action, as it mistakenly takes the first action to be evidence for the latent.
  edits: Substitute, sentence-1 tokens [62, 70] → sentence-2 tokens [62, 71] (Improve-grammar-Typo)
Row 79 (pair Y6LzLWXlS8E.WNev_iSes.00079)
  sentence 1 (2, Section, Y6LzLWXlS8E_07_00_00, section 7): 2.4 Interventional policy
  sentence 2 (2, Section, WNev_iSes_07_00_00, section 7): 2.4 Interventional policy
  edits: none
Row 80 (pair Y6LzLWXlS8E.WNev_iSes.00080)
  sentence 1 (2, Paragraph, Y6LzLWXlS8E_07_00_00, section 7, paragraph 0, sentence 0): A solution to this issue is to only take as evidence the data that was actually informed by the latent,which are the transitions.
  sentence 2 (2, Paragraph, WNev_iSes_07_00_00, section 7, paragraph 0, sentence 0): A solution to this issue is to only take as evidence the data that was actually informed by the latent, which are the transitions.
  edits: Substitute, sentence-1 tokens [96, 108] → sentence-2 tokens [96, 109] (Improve-grammar-Typo)
Row 81 (pair Y6LzLWXlS8E.WNev_iSes.00081)
  sentence 1 (2, Paragraph, Y6LzLWXlS8E_07_00_01, section 7, paragraph 0, sentence 1): This defines the following imitator policy:
  sentence 2 (2, Paragraph, WNev_iSes_07_00_01, section 7, paragraph 0, sentence 1): This defines the following imitator policy:
  edits: none
Row 82 (pair Y6LzLWXlS8E.WNev_iSes.00082)
  sentence 1 (2, Equation, Y6LzLWXlS8E_07_01_00, section 7, paragraph 1, sentence 0): [Equation]
  sentence 2 (2, Equation, WNev_iSes_07_01_00, section 7, paragraph 1, sentence 0): [Equation]
  edits: none
Row 83 (pair Y6LzLWXlS8E.WNev_iSes.00083)
  sentence 1 (2, Paragraph, Y6LzLWXlS8E_07_02_00, section 7, paragraph 2, sentence 0): In a causal framework, that corresponds to treating the choice of past actions as interventions.
  sentence 2 (2, Paragraph, WNev_iSes_07_02_00, section 7, paragraph 2, sentence 0): In a causal framework, that corresponds to treating the choice of past actions as interventions.
  edits: none
Row 84 (pair Y6LzLWXlS8E.WNev_iSes.00084)
  sentence 1 (2, Paragraph, Y6LzLWXlS8E_07_02_01, section 7, paragraph 2, sentence 1): Inthe notation of the do-calculus [Pearl, 2009], this equals p ( a t | s 1 , do( a 1 ) , s 2 , do( a 2 ) , . . .
  sentence 2 (2, Paragraph, WNev_iSes_07_02_01, section 7, paragraph 2, sentence 1): In the notation of the do-calculus [Pearl, 2009], this equals p ( a t | s 1 , do( a 1 ) , s 2 , do( a 2 ) , . . .
  edits: Substitute, sentence-1 tokens [0, 5] → sentence-2 tokens [0, 6] (Improve-grammar-Typo)
Row 85 (pair Y6LzLWXlS8E.WNev_iSes.00085)
  sentence 1 (2, Paragraph, Y6LzLWXlS8E_07_02_02, section 7, paragraph 2, sentence 2): , s t ) .
  sentence 2 (2, Paragraph, WNev_iSes_07_02_02, section 7, paragraph 2, sentence 2): , s t ) .
  edits: none
Row 86 (pair Y6LzLWXlS8E.WNev_iSes.00086)
  sentence 1 (2, Paragraph, Y6LzLWXlS8E_07_02_03, section 7, paragraph 2, sentence 3): Thepolicy in equation (5) is therefore known as interventional policy [Ortega et al., 2021].
  sentence 2 (2, Paragraph, WNev_iSes_07_02_03, section 7, paragraph 2, sentence 3): The policy in Equation (5) is therefore known as interventional policy [Ortega et al., 2021].
  edits: Substitute, sentence-1 tokens [0, 9] → sentence-2 tokens [0, 10] (Improve-grammar-Typo); Substitute, sentence-1 tokens [13, 21] → sentence-2 tokens [14, 22] (Improve-grammar-Typo)
Row 87 (pair Y6LzLWXlS8E.WNev_iSes.00087)
  sentence 1 (2, Section, Y6LzLWXlS8E_08_00_00, section 8): Deconfounding imitation learning
  sentence 2: null
  edits: Deletion, sentence-1 tokens [0, 32] (Content)
Row 88 (pair Y6LzLWXlS8E.WNev_iSes.00088)
  sentence 1 (2, Figure, Y6LzLWXlS8E_08_00_00, section 8, paragraph 0, sentence 0): [Figure]
  sentence 2 (2, Figure, WNev_iSes_07_03_00, section 7, paragraph 3, sentence 0): [Figure]
  edits: none
Row 90 (pair Y6LzLWXlS8E.WNev_iSes.00090)
  sentence 1 (2, Paragraph, Y6LzLWXlS8E_08_01_00, section 8, paragraph 1, sentence 0): We now present our theoretical results on how imitation learning can be deconfounded.
  sentence 2 (3, Paragraph, WNev_iSes_08_00_00, section 8, paragraph 0, sentence 0): We now present our theoretical results on how imitation learning can be deconfounded.
  edits: none
Row 91 (pair Y6LzLWXlS8E.WNev_iSes.00091)
  sentence 1 (2, Paragraph, Y6LzLWXlS8E_08_01_01, section 8, paragraph 1, sentence 1): We first showthat the interventional policy is optimal in some sense, before analyzing in which settings it can belearned.
  sentence 2 (3, Paragraph, WNev_iSes_08_00_01, section 8, paragraph 0, sentence 1): We first show that the interventional policy is optimal in some sense, before analyzing in which settings it can be learned.
  edits: Substitute, sentence-1 tokens [9, 17] → sentence-2 tokens [9, 18] (Improve-grammar-Typo); Substitute, sentence-1 tokens [112, 122] → sentence-2 tokens [113, 124] (Improve-grammar-Typo)
Row 92 (pair Y6LzLWXlS8E.WNev_iSes.00092)
  sentence 1 (2, Section, Y6LzLWXlS8E_09_00_00, section 9): 3.1 Optimality of the interventional policy
  sentence 2 (3, Section, WNev_iSes_09_00_00, section 9): 3.1 Optimality of the interventional policy
  edits: none
Row 93 (pair Y6LzLWXlS8E.WNev_iSes.00093)
  sentence 1 (2, Figure, Y6LzLWXlS8E_09_00_00, section 9, paragraph 0, sentence 0): [Figure] Under some reasonable assumptions, the interventional policy approaches the expert’s policy, as weprove in the appendix 2. [Figure]
  sentence 2 (3, Paragraph, WNev_iSes_09_00_00, section 9, paragraph 0, sentence 0): Under some reasonable assumptions, the interventional policy approaches the expert’s policy, as we prove in the Appendix B.
  edits: Deletion, sentence-1 tokens [0, 8] (Format); Substitute, sentence-1 tokens [105, 112] → sentence-2 tokens [96, 104] (Improve-grammar-Typo); Substitute, sentence-1 tokens [120, 140] → sentence-2 tokens [112, 123] (Format)
Row 94 (pair Y6LzLWXlS8E.WNev_iSes.00094)
  sentence 1 (3, Paragraph, Y6LzLWXlS8E_09_03_00, section 9, paragraph 3, sentence 0): Theorem 3.1 (Informal) .
  sentence 2 (3, Paragraph, WNev_iSes_09_01_00, section 9, paragraph 1, sentence 0): Theorem 3.1 (Informal) .
  edits: none
Row 95 (pair Y6LzLWXlS8E.WNev_iSes.00095)
  sentence 1 (3, Paragraph, Y6LzLWXlS8E_09_03_01, section 9, paragraph 3, sentence 1): If the interventional inference p int ( θ | τ <t ) approaches the true latent of theenvironment as t → ∞ on the rollouts of π int , and if the expert maximises some reward that is fixedacross all environments, then as t → ∞ , the imitator policy π int ( a t | s ) approaches the expert policy.
  sentence 2 (3, Paragraph, WNev_iSes_09_01_01, section 9, paragraph 1, sentence 1): If the interventional inference p int ( θ | τ <t ) approaches the true latent of the environment as t → ∞ on the rollouts of π int , and if the expert maximises some reward that is fixed across all environments, then as t → ∞ , the imitator policy π int ( a t | s ) approaches the expert policy.
  edits: Substitute, sentence-1 tokens [81, 95] → sentence-2 tokens [81, 96] (Improve-grammar-Typo); Substitute, sentence-1 tokens [180, 191] → sentence-2 tokens [181, 193] (Improve-grammar-Typo)
Row 96 (pair Y6LzLWXlS8E.WNev_iSes.00096)
  sentence 1 (3, Paragraph, Y6LzLWXlS8E_09_04_00, section 9, paragraph 4, sentence 0): Proof.
  sentence 2 (3, Paragraph, WNev_iSes_09_02_00, section 9, paragraph 2, sentence 0): Proof.
  edits: none
Row 97 (pair Y6LzLWXlS8E.WNev_iSes.00097)
  sentence 1 (3, Paragraph, Y6LzLWXlS8E_09_04_01, section 9, paragraph 4, sentence 1): See lemma 2.1 in the appendix.
  sentence 2 (3, Paragraph, WNev_iSes_09_02_01, section 9, paragraph 2, sentence 1): See Lemma 2.1 in the appendix.
  edits: Substitute, sentence-1 tokens [4, 9] → sentence-2 tokens [4, 9] (Improve-grammar-Typo)
Row 98 (pair Y6LzLWXlS8E.WNev_iSes.00098)
  sentence 1 (3, Paragraph, Y6LzLWXlS8E_09_05_00, section 9, paragraph 5, sentence 0): The requirement here means that the transition dynamics must be informative about the latent —we consider latent confounders that manifest in the dynamics, not those that affect only the agentbehavior.
  sentence 2 (3, Paragraph, WNev_iSes_09_03_00, section 9, paragraph 3, sentence 0): The requirement here means that the transition dynamics must be informative about the latent — we consider latent confounders that manifest in the dynamics, not those that affect only the agent behavior.
  edits: Substitute, sentence-1 tokens [93, 96] → sentence-2 tokens [93, 97] (Improve-grammar-Typo); Substitute, sentence-1 tokens [187, 201] → sentence-2 tokens [188, 203] (Improve-grammar-Typo)
Row 99 (pair Y6LzLWXlS8E.WNev_iSes.00099)
  sentence 1 (3, Paragraph, Y6LzLWXlS8E_09_05_01, section 9, paragraph 5, sentence 1): In this case, the interventional policy thus presents a solution to the confounding problem.
  sentence 2 (3, Paragraph, WNev_iSes_09_03_01, section 9, paragraph 3, sentence 1): In this case, the interventional policy thus presents a solution to the confounding problem.
  edits: none
Row 100 (pair Y6LzLWXlS8E.WNev_iSes.00100)
  sentence 1 (3, Paragraph, Y6LzLWXlS8E_09_06_00, section 9, paragraph 6, sentence 0): In the rest of this paper we focus on the question if and how it can be learned from data.
  sentence 2 (3, Paragraph, WNev_iSes_09_04_00, section 9, paragraph 4, sentence 0): In the rest of this paper we focus on the question if and how it can be learned from data.
  edits: none
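In each row, the edit annotations form a JSON object keyed by edit index ("0", "1", ...) and padded with nulls to a fixed width; only the non-null entries describe an edit, via the fields type, sentence-1-token-indices, sentence-2-token-indices, and intention. The following is a minimal Python sketch of collapsing that padding into an ordered list of edits, assuming the rows have been exported as JSON Lines; the file name rows.jsonl and the row-level field name "edits" are hypothetical, inferred from this preview rather than taken from the dataset's own loader.

import json

def parse_edits(edits: dict) -> list:
    """Collapse the null-padded, index-keyed edits object into an ordered list."""
    parsed = []
    for key in sorted(edits, key=int):  # keys are string indices: "0", "1", ...
        edit = edits[key]
        if edit is None:  # trailing null padding carries no edit
            continue
        parsed.append({
            "type": edit["type"],  # Insertion, Deletion, or Substitute
            "s1_span": edit["sentence-1-token-indices"],  # null for Insertion
            "s2_span": edit["sentence-2-token-indices"],  # null for Deletion
            "intention": edit["intention"],  # e.g. Improve-grammar-Typo, Format, Content
        })
    return parsed

# Hypothetical usage: one JSON object per line, each with an "edits" field
# shaped like the objects shown in the preview above.
with open("rows.jsonl") as f:
    for line in f:
        row = json.loads(line)
        for edit in parse_edits(row["edits"]):
            print(edit["type"], edit["intention"], edit["s1_span"], edit["s2_span"])

Representing the edits as a variable-length list rather than a fixed-width, index-keyed object also keeps the per-row schema uniform, which is generally friendlier to columnar loaders.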