Unnamed: 0 (int64, 0-2.93k) | code (string, lengths 101-62.2k) | docs (string, lengths 51-10.7k) | doc_len (int64, 4-1.74k) | words (int64, 4-4.82k) | lang (string, 1 class) | prompt (string, lengths 320-71.2k) |
---|---|---|---|---|---|---|
900 | def box2corners(box):
B = box.shape[0]
x, y, w, h, alpha = paddle.split(box, 5, axis=-1)
x4 = paddle.to_tensor(
[0.5, 0.5, -0.5, -0.5], dtype=paddle.float32).reshape(
(1, 1, 4)) # (1,1,4)
x4 = x4 * w # (B, N, 4)
y4 = paddle.to_tensor(
[-0.5, 0.5, 0.5, -0.5], dtype=paddle.float32).reshape((1, 1, 4))
y4 = y4 * h # (B, N, 4)
corners = paddle.stack([x4, y4], axis=-1) # (B, N, 4, 2)
sin = paddle.sin(alpha)
cos = paddle.cos(alpha)
row1 = paddle.concat([cos, sin], axis=-1)
row2 = paddle.concat([-sin, cos], axis=-1) # (B, N, 2)
rot_T = paddle.stack([row1, row2], axis=-2) # (B, N, 2, 2)
rotated = paddle.bmm(corners.reshape([-1, 4, 2]), rot_T.reshape([-1, 2, 2]))
rotated = rotated.reshape([B, -1, 4, 2]) # (B*N, 4, 2) -> (B, N, 4, 2)
rotated[..., 0] += x
rotated[..., 1] += y
return rotated
| convert box coordinate to corners
Args:
box (Tensor): (B, N, 5) with (x, y, w, h, alpha) angle is in [0, 90)
Returns:
corners (Tensor): (B, N, 4, 2) with (x1, y1, x2, y2, x3, y3, x4, y4)
| 38 | 128 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def box2corners(box):
B = box.shape[0]
x, y, w, h, alpha = paddle.split(box, 5, axis=-1)
x4 = paddle.to_tensor(
[0.5, 0.5, -0.5, -0.5], dtype=paddle.float32).reshape(
(1, 1, 4)) # (1,1,4)
x4 = x4 * w # (B, N, 4)
y4 = paddle.to_tensor(
[-0.5, 0.5, 0.5, -0.5], dtype=paddle.float32).reshape((1, 1, 4))
y4 = y4 * h # (B, N, 4)
corners = paddle.stack([x4, y4], axis=-1) # (B, N, 4, 2)
sin = paddle.sin(alpha)
cos = paddle.cos(alpha)
row1 = paddle.concat([cos, sin], axis=-1)
row2 = paddle.concat([-sin, cos], axis=-1) # (B, N, 2)
rot_T = paddle.stack([row1, row2], axis=-2) # (B, N, 2, 2)
rotated = paddle.bmm(corners.reshape([-1, 4, 2]), rot_T.reshape([-1, 2, 2]))
rotated = rotated.reshape([B, -1, 4, 2]) # (B*N, 4, 2) -> (B, N, 4, 2)
rotated[..., 0] += x
rotated[..., 1] += y
return rotated
```
###Assistant : convert box coordinate to corners
Args:
box (Tensor): (B, N, 5) with (x, y, w, h, alpha) angle is in [0, 90)
Returns:
corners (Tensor): (B, N, 4, 2) with (x1, y1, x2, y2, x3, y3, x4, y4)
|
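The corner computation in `box2corners` above is easier to follow in a plain NumPy re-implementation. The sketch below is an illustration only: it assumes boxes shaped (B, N, 5) with (x, y, w, h, alpha) as the docstring states, and replaces the Paddle ops with their NumPy equivalents.
```Python
import numpy as np

def box2corners_np(box):
    # box: (B, N, 5) with (x, y, w, h, alpha); NumPy sketch of the Paddle code above
    x, y, w, h, alpha = np.split(box, 5, axis=-1)            # each (B, N, 1)
    x4 = np.array([0.5, 0.5, -0.5, -0.5]) * w                # (B, N, 4) corner x-offsets
    y4 = np.array([-0.5, 0.5, 0.5, -0.5]) * h                # (B, N, 4) corner y-offsets
    corners = np.stack([x4, y4], axis=-1)                    # (B, N, 4, 2)
    sin, cos = np.sin(alpha), np.cos(alpha)
    rot_T = np.stack([np.concatenate([cos, sin], axis=-1),
                      np.concatenate([-sin, cos], axis=-1)], axis=-2)  # (B, N, 2, 2)
    rotated = corners @ rot_T                                # rotate the offsets by alpha
    rotated[..., 0] += x                                     # translate to the box centre
    rotated[..., 1] += y
    return rotated

print(box2corners_np(np.array([[[1.0, 2.0, 4.0, 2.0, 0.0]]])))  # (1, 1, 4, 2) corners
```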
901 | def get_crash_rate_alert_metrics_aggregation_value(self, subscription_update):
rows = subscription_update["values"]["data"]
if BaseMetricsEntitySubscription.is_crash_rate_format_v2(rows):
version = "v2"
result = self._get_crash_rate_alert_metrics_aggregation_value_v2(subscription_update)
else:
version = "v1"
result = self._get_crash_rate_alert_metrics_aggregation_value_v1(subscription_update)
metrics.incr(
"incidents.alert_rules.get_crash_rate_alert_metrics_aggregation_value",
tags={"format": version},
sample_rate=1.0,
)
return result
| Handle both update formats. Once all subscriptions have been updated
to v2, we can remove v1 and replace this function with current v2.
| 23 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_crash_rate_alert_metrics_aggregation_value(self, subscription_update):
rows = subscription_update["values"]["data"]
if BaseMetricsEntitySubscription.is_crash_rate_format_v2(rows):
version = "v2"
result = self._get_crash_rate_alert_metrics_aggregation_value_v2(subscription_update)
else:
version = "v1"
result = self._get_crash_rate_alert_metrics_aggregation_value_v1(subscription_update)
metrics.incr(
"incidents.alert_rules.get_crash_rate_alert_metrics_aggregation_value",
tags={"format": version},
sample_rate=1.0,
)
return result
```
###Assistant : Handle both update formats. Once all subscriptions have been updated
to v2, we can remove v1 and replace this function with current v2.
|
902 | def strict_promotion_if_dtypes_match(dtypes):
if all(dtype == dtypes[0] for dtype in dtypes):
return jax.numpy_dtype_promotion('strict')
return jax.numpy_dtype_promotion('standard')
|
Context manager to enable strict promotion if all dtypes match,
and enable standard dtype promotion otherwise.
| 16 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def strict_promotion_if_dtypes_match(dtypes):
if all(dtype == dtypes[0] for dtype in dtypes):
return jax.numpy_dtype_promotion('strict')
return jax.numpy_dtype_promotion('standard')
```
###Assistant :
Context manager to enable strict promotion if all dtypes match,
and enable standard dtype promotion otherwise.
|
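A usage sketch for `strict_promotion_if_dtypes_match` (assuming JAX is installed and exposes `jax.numpy_dtype_promotion` as the snippet above relies on): the returned object is a context manager, so it can be entered directly.
```Python
import jax.numpy as jnp

dtypes = [jnp.float32, jnp.float32]
with strict_promotion_if_dtypes_match(dtypes):
    # all dtypes match, so 'strict' promotion is active inside this block
    out = jnp.zeros(3, jnp.float32) + jnp.ones(3, jnp.float32)
```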
903 | def test_stroptions_deprecated_subset():
with pytest.raises(ValueError, match="deprecated options must be a subset"):
StrOptions({"a", "b", "c"}, deprecated={"a", "d"})
| Check that the deprecated parameter must be a subset of options. | 11 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_stroptions_deprecated_subset():
with pytest.raises(ValueError, match="deprecated options must be a subset"):
StrOptions({"a", "b", "c"}, deprecated={"a", "d"})
```
###Assistant : Check that the deprecated parameter must be a subset of options.
|
904 | def _validate_target_and_loss(self, y, loss):
# `self.loss` references the loss added via `compile` call. If users have
# provided such, the target must be provided; otherwise it's a user error.
# Note that `self.loss` does not include losses added via `add_loss`, and it
# is a valid use when such loss from `add_loss` exists and target does not.
if self.loss and y is None:
raise ValueError(
"Target data is missing. Your model was compiled with "
f"loss={self.loss}, "
"and therefore expects target data to be provided in `fit()`."
)
# For training, there must be compiled loss or regularization loss to exist
# in order to apply the gradients. If one is not found, it means no loss
# was supplied via `compile` or `add_loss`.
elif loss is None:
raise ValueError(
"No loss found. You may have forgotten to provide a `loss` argument "
"in the `compile()` method."
)
| Raises error if target or loss is not found.
This method verifies that the target and loss are properly populated
when applicable, or raises errors.
Args:
y: the target for training.
loss: the total loss tensor including loss added via `compile` and
`add_loss`.
| 43 | 148 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _validate_target_and_loss(self, y, loss):
# `self.loss` references the loss added via `compile` call. If users have
# provided such, the target must be provided; otherwise it's a user error.
# Note that `self.loss` does not include losses added via `add_loss`, and it
# is a valid use when such loss from `add_loss` exists and target does not.
if self.loss and y is None:
raise ValueError(
"Target data is missing. Your model was compiled with "
f"loss={self.loss}, "
"and therefore expects target data to be provided in `fit()`."
)
# For training, there must be compiled loss or regularization loss to exist
# in order to apply the gradients. If one is not found, it means no loss
# was supplied via `compile` or `add_loss`.
elif loss is None:
raise ValueError(
"No loss found. You may have forgotten to provide a `loss` argument "
"in the `compile()` method."
)
```
###Assistant : Raises error if target or loss is not found.
This method verifies that the target and loss are properly populated
when applicable, or raises errors.
Args:
y: the target for training.
loss: the total loss tensor including loss added via `compile` and
`add_loss`.
|
905 | def _validate_datetimelike_monotonic(self):
# GH 46061
if self._on.hasnans:
self._raise_monotonic_error("values must not have NaT")
for group_indices in self._grouper.indices.values():
group_on = self._on.take(group_indices)
if not (
group_on.is_monotonic_increasing or group_on.is_monotonic_decreasing
):
on = "index" if self.on is None else self.on
raise ValueError(
f"Each group within {on} must be monotonic. "
f"Sort the values in {on} first."
)
|
Validate that each group in self._on is monotonic
| 8 | 52 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _validate_datetimelike_monotonic(self):
# GH 46061
if self._on.hasnans:
self._raise_monotonic_error("values must not have NaT")
for group_indices in self._grouper.indices.values():
group_on = self._on.take(group_indices)
if not (
group_on.is_monotonic_increasing or group_on.is_monotonic_decreasing
):
on = "index" if self.on is None else self.on
raise ValueError(
f"Each group within {on} must be monotonic. "
f"Sort the values in {on} first."
)
```
###Assistant :
Validate that each group in self._on is monotonic
|
906 | def framework_info(filename):
is_framework = STRICT_FRAMEWORK_RE.match(filename)
if not is_framework:
return None
return is_framework.groupdict()
|
A framework name can take one of the following four forms:
Location/Name.framework/Versions/SomeVersion/Name_Suffix
Location/Name.framework/Versions/SomeVersion/Name
Location/Name.framework/Name_Suffix
Location/Name.framework/Name
returns None if not found, or a mapping equivalent to:
dict(
location='Location',
name='Name.framework/Versions/SomeVersion/Name_Suffix',
shortname='Name',
version='SomeVersion',
suffix='Suffix',
)
Note that SomeVersion and Suffix are optional and may be None
if not present
| 46 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def framework_info(filename):
is_framework = STRICT_FRAMEWORK_RE.match(filename)
if not is_framework:
return None
return is_framework.groupdict()
```
###Assistant :
A framework name can take one of the following four forms:
Location/Name.framework/Versions/SomeVersion/Name_Suffix
Location/Name.framework/Versions/SomeVersion/Name
Location/Name.framework/Name_Suffix
Location/Name.framework/Name
returns None if not found, or a mapping equivalent to:
dict(
location='Location',
name='Name.framework/Versions/SomeVersion/Name_Suffix',
shortname='Name',
version='SomeVersion',
suffix='Suffix',
)
Note that SomeVersion and Suffix are optional and may be None
if not present
|
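`STRICT_FRAMEWORK_RE` is defined elsewhere in the module, so the real work happens in the regular expression rather than in `framework_info` itself. The pattern below is a hypothetical approximation (not the library's actual regex), shown only to illustrate how named groups can produce the mapping the docstring describes for the four listed forms.
```Python
import re

# Hypothetical approximation of STRICT_FRAMEWORK_RE, for illustration only.
FRAMEWORK_RE = re.compile(
    r"(?P<location>.+)/"
    r"(?P<name>(?P<shortname>[^/]+)\.framework/"
    r"(?:Versions/(?P<version>[^/]+)/)?"
    r"(?P=shortname)(?:_(?P<suffix>[^_/]+))?)$"
)

m = FRAMEWORK_RE.match("Location/Name.framework/Versions/SomeVersion/Name_Suffix")
print(m.groupdict())
# {'location': 'Location',
#  'name': 'Name.framework/Versions/SomeVersion/Name_Suffix',
#  'shortname': 'Name', 'version': 'SomeVersion', 'suffix': 'Suffix'}
```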
907 | def contained_in(filename, directory):
filename = os.path.normcase(os.path.abspath(filename))
directory = os.path.normcase(os.path.abspath(directory))
return os.path.commonprefix([filename, directory]) == directory
| Test if a file is located within the given directory. | 10 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def contained_in(filename, directory):
filename = os.path.normcase(os.path.abspath(filename))
directory = os.path.normcase(os.path.abspath(directory))
return os.path.commonprefix([filename, directory]) == directory
```
###Assistant : Test if a file is located within the given directory.
|
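A quick usage sketch for `contained_in` (standard library only; the paths are made up). Because `os.path.commonprefix` compares strings character by character rather than by path components, a sibling directory such as `/tmp/project2` also reports as contained in `/tmp/project`; a component-wise check (for example `os.path.commonpath`) avoids that edge case.
```Python
print(contained_in("/tmp/project/src/main.py", "/tmp/project"))  # True
print(contained_in("/tmp/other/file.txt", "/tmp/project"))       # False
print(contained_in("/tmp/project2/file.txt", "/tmp/project"))    # also True: prefix match, not component match
```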
908 | def glu(x, axis=-1):
a, b = tf.split(x, 2, axis=axis)
return a * tf.math.sigmoid(b)
if version.parse(tf.version.VERSION) >= version.parse("2.4"):
|
Gated Linear Unit. Implementation as defined in the original paper (see https://arxiv.org/abs/1612.08083), where
the input `x` is split in two halves across a dimension (`axis`), A and B, returning A * sigmoid(B).
Args:
`x`: float Tensor to perform activation
`axis`: dimension across which `x` is split in half
Returns:
`x` with the GLU activation applied (with its size halved across the dimension `axis`).
| 63 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def glu(x, axis=-1):
a, b = tf.split(x, 2, axis=axis)
return a * tf.math.sigmoid(b)
if version.parse(tf.version.VERSION) >= version.parse("2.4"):
```
###Assistant :
Gated Linear Unit. Implementation as defined in the original paper (see https://arxiv.org/abs/1612.08083), where
the input `x` is split in two halves across a dimension (`axis`), A and B, returning A * sigmoid(B).
Args:
`x`: float Tensor to perform activation
`axis`: dimension across which `x` is split in half
Returns:
`x` with the GLU activation applied (with its size halved across the dimension `axis`).
|
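A usage sketch for `glu`, assuming TensorFlow is installed: the chosen axis is split in half, and the second half gates the first through a sigmoid.
```Python
import tensorflow as tf

x = tf.constant([[1.0, 2.0, 3.0, 4.0]])  # shape (1, 4)
y = glu(x, axis=-1)                      # shape (1, 2): [1 * sigmoid(3), 2 * sigmoid(4)]
print(y.numpy())
```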
909 | def matplot(self, plot, opts=None, env=None, win=None):
opts = {} if opts is None else opts
_title2str(opts)
_assert_opts(opts)
# write plot to SVG buffer:
buffer = StringIO()
plot.savefig(buffer, format="svg")
buffer.seek(0)
svg = buffer.read()
buffer.close()
if opts.get("resizable", False):
if not BS4_AVAILABLE:
raise ImportError("No module named 'bs4'")
else:
try:
soup = bs4.BeautifulSoup(svg, "xml")
except bs4.FeatureNotFound as e:
import six
six.raise_from(ImportError("No module named 'lxml'"), e)
height = soup.svg.attrs.pop("height", None)
width = soup.svg.attrs.pop("width", None)
svg = str(soup)
else:
height = None
width = None
# show SVG:
if "height" not in opts:
height = height or re.search(r'height\="([0-9\.]*)pt"', svg)
if height is not None:
if not isstr(height):
height = height.group(1)
height = height.replace("pt", "00")
opts["height"] = 1.4 * int(math.ceil(float(height)))
if "width" not in opts:
width = width or re.search(r'width\="([0-9\.]*)pt"', svg)
if width is not None:
if not isstr(width):
width = width.group(1)
width = width.replace("pt", "00")
opts["width"] = 1.35 * int(math.ceil(float(width)))
return self.svg(svgstr=svg, opts=opts, env=env, win=win)
|
This function draws a Matplotlib `plot`. The function supports
one plot-specific option: `resizable`. When set to `True` the plot
is resized with the pane. You need `beautifulsoup4` and `lxml`
packages installed to use this option.
| 35 | 149 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def matplot(self, plot, opts=None, env=None, win=None):
opts = {} if opts is None else opts
_title2str(opts)
_assert_opts(opts)
# write plot to SVG buffer:
buffer = StringIO()
plot.savefig(buffer, format="svg")
buffer.seek(0)
svg = buffer.read()
buffer.close()
if opts.get("resizable", False):
if not BS4_AVAILABLE:
raise ImportError("No module named 'bs4'")
else:
try:
soup = bs4.BeautifulSoup(svg, "xml")
except bs4.FeatureNotFound as e:
import six
six.raise_from(ImportError("No module named 'lxml'"), e)
height = soup.svg.attrs.pop("height", None)
width = soup.svg.attrs.pop("width", None)
svg = str(soup)
else:
height = None
width = None
# show SVG:
if "height" not in opts:
height = height or re.search(r'height\="([0-9\.]*)pt"', svg)
if height is not None:
if not isstr(height):
height = height.group(1)
height = height.replace("pt", "00")
opts["height"] = 1.4 * int(math.ceil(float(height)))
if "width" not in opts:
width = width or re.search(r'width\="([0-9\.]*)pt"', svg)
if width is not None:
if not isstr(width):
width = width.group(1)
width = width.replace("pt", "00")
opts["width"] = 1.35 * int(math.ceil(float(width)))
return self.svg(svgstr=svg, opts=opts, env=env, win=win)
```
###Assistant :
This function draws a Matplotlib `plot`. The function supports
one plot-specific option: `resizable`. When set to `True` the plot
is resized with the pane. You need `beautifulsoup4` and `lxml`
packages installed to use this option.
|
910 | def acl_clone(consul_url=None, token=None, **kwargs):
ret = {}
data = {}
if not consul_url:
consul_url = _get_config()
if not consul_url:
log.error("No Consul URL found.")
ret["message"] = "No Consul URL found."
ret["res"] = False
return ret
if "id" not in kwargs:
ret["message"] = 'Required parameter "id" is missing.'
ret["res"] = False
return ret
function = "acl/clone/{}".format(kwargs["id"])
res = _query(
consul_url=consul_url, token=token, data=data, method="PUT", function=function
)
if res["res"]:
ret["res"] = True
ret["message"] = "ACL {} cloned.".format(kwargs["name"])
ret["ID"] = res["data"]
else:
ret["res"] = False
ret["message"] = "Cloning ACL item {} failed.".format(kwargs["name"])
return ret
|
Information about an ACL token.
:param consul_url: The Consul server URL.
:param id: Unique identifier for the ACL to update.
:return: Boolean, message of success or
failure, and new ID of cloned ACL.
CLI Example:
.. code-block:: bash
salt '*' consul.acl_info id='c1c4d223-91cb-3d1f-1ee8-f2af9e7b6716'
| 42 | 89 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def acl_clone(consul_url=None, token=None, **kwargs):
ret = {}
data = {}
if not consul_url:
consul_url = _get_config()
if not consul_url:
log.error("No Consul URL found.")
ret["message"] = "No Consul URL found."
ret["res"] = False
return ret
if "id" not in kwargs:
ret["message"] = 'Required parameter "id" is missing.'
ret["res"] = False
return ret
function = "acl/clone/{}".format(kwargs["id"])
res = _query(
consul_url=consul_url, token=token, data=data, method="PUT", function=function
)
if res["res"]:
ret["res"] = True
ret["message"] = "ACL {} cloned.".format(kwargs["name"])
ret["ID"] = res["data"]
else:
ret["res"] = False
ret["message"] = "Cloning ACL item {} failed.".format(kwargs["name"])
return ret
```
###Assistant :
Information about an ACL token.
:param consul_url: The Consul server URL.
:param id: Unique identifier for the ACL to update.
:return: Boolean, message of success or
failure, and new ID of cloned ACL.
CLI Example:
.. code-block:: bash
salt '*' consul.acl_info id='c1c4d223-91cb-3d1f-1ee8-f2af9e7b6716'
|
911 | def slice_indexer(self, start=None, end=None, step=None, kind=lib.no_default):
self._deprecated_arg(kind, "kind", "slice_indexer")
# For historical reasons DatetimeIndex supports slices between two
# instances of datetime.time as if it were applying a slice mask to
# an array of (self.hour, self.minute, self.seconds, self.microsecond).
if isinstance(start, time) and isinstance(end, time):
if step is not None and step != 1:
raise ValueError("Must have step size of 1 with time slices")
return self.indexer_between_time(start, end)
if isinstance(start, time) or isinstance(end, time):
raise KeyError("Cannot mix time and non-time slice keys")
|
Return indexer for specified label slice.
Index.slice_indexer, customized to handle time slicing.
In addition to functionality provided by Index.slice_indexer, does the
following:
- if both `start` and `end` are instances of `datetime.time`, it
invokes `indexer_between_time`
- if `start` and `end` are both either string or None perform
value-based selection in non-monotonic cases.
| 52 | 81 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def slice_indexer(self, start=None, end=None, step=None, kind=lib.no_default):
self._deprecated_arg(kind, "kind", "slice_indexer")
# For historical reasons DatetimeIndex supports slices between two
# instances of datetime.time as if it were applying a slice mask to
# an array of (self.hour, self.minute, self.seconds, self.microsecond).
if isinstance(start, time) and isinstance(end, time):
if step is not None and step != 1:
raise ValueError("Must have step size of 1 with time slices")
return self.indexer_between_time(start, end)
if isinstance(start, time) or isinstance(end, time):
raise KeyError("Cannot mix time and non-time slice keys")
```
###Assistant :
Return indexer for specified label slice.
Index.slice_indexer, customized to handle time slicing.
In addition to functionality provided by Index.slice_indexer, does the
following:
- if both `start` and `end` are instances of `datetime.time`, it
invokes `indexer_between_time`
- if `start` and `end` are both either string or None perform
value-based selection in non-monotonic cases.
|
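From the caller's side, the behaviour documented above surfaces when slicing a `DatetimeIndex`-backed object with two `datetime.time` keys, the path that delegates to `indexer_between_time`. A small usage sketch (assumes pandas is installed; the data is made up):
```Python
import pandas as pd
from datetime import time

ser = pd.Series(range(48), index=pd.date_range("2022-01-01", periods=48, freq="H"))
morning = ser[time(9, 0):time(11, 0)]   # rows whose clock time falls between 09:00 and 11:00, on every day
print(morning.index.time[:3])
```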
912 | def fix_old_dry_orders(engine):
with engine.begin() as connection:
connection.execute(
text(
)
)
connection.execute(
text(
)
)
|
update orders
set ft_is_open = 0
where ft_is_open = 1 and (ft_trade_id, order_id) not in (
select id, stoploss_order_id from trades where stoploss_order_id is not null
) and ft_order_side = 'stoploss'
and order_id like 'dry_%'
update orders
set ft_is_open = 0
where ft_is_open = 1
and (ft_trade_id, order_id) not in (
select id, open_order_id from trades where open_order_id is not null
) and ft_order_side != 'stoploss'
and order_id like 'dry_%'
| 70 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def fix_old_dry_orders(engine):
with engine.begin() as connection:
connection.execute(
text(
)
)
connection.execute(
text(
)
)
```
###Assistant :
update orders
set ft_is_open = 0
where ft_is_open = 1 and (ft_trade_id, order_id) not in (
select id, stoploss_order_id from trades where stoploss_order_id is not null
) and ft_order_side = 'stoploss'
and order_id like 'dry_%'
update orders
set ft_is_open = 0
where ft_is_open = 1
and (ft_trade_id, order_id) not in (
select id, open_order_id from trades where open_order_id is not null
) and ft_order_side != 'stoploss'
and order_id like 'dry_%'
|
913 | def build_data_frame(self, data, flags=None, stream_id=1, padding_len=0):
flags = set(flags) if flags is not None else set()
f = DataFrame(stream_id)
f.data = data
f.flags = flags
if padding_len:
flags.add("PADDED")
f.pad_length = padding_len
return f
|
Builds a single data frame out of a chunk of data.
| 11 | 33 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def build_data_frame(self, data, flags=None, stream_id=1, padding_len=0):
flags = set(flags) if flags is not None else set()
f = DataFrame(stream_id)
f.data = data
f.flags = flags
if padding_len:
flags.add("PADDED")
f.pad_length = padding_len
return f
```
###Assistant :
Builds a single data frame out of a chunk of data.
|
914 | def set_dryrun_parser(parser=None):
if not parser:
parser = set_base_parser()
parser.add_argument(
'host',
type=str,
help='The full host address of the Gateway, e.g. grpc://localhost:12345',
)
parser.add_argument(
'--timeout',
type=int,
default=3000,
help=,
)
return parser
| Set the parser for `dryrun`
:param parser: an existing parser to build upon
:return: the parser
Timeout in milliseconds of one check
-1 for waiting forever
| 26 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def set_dryrun_parser(parser=None):
if not parser:
parser = set_base_parser()
parser.add_argument(
'host',
type=str,
help='The full host address of the Gateway, e.g. grpc://localhost:12345',
)
parser.add_argument(
'--timeout',
type=int,
default=3000,
help=,
)
return parser
```
###Assistant : Set the parser for `dryrun`
:param parser: an existing parser to build upon
:return: the parser
Timeout in milliseconds of one check
-1 for waiting forever
|
915 | def wheel_dist_info_dir(source, name):
# type: (ZipFile, str) -> str
# Zip file path separators must be /
subdirs = {p.split("/", 1)[0] for p in source.namelist()}
info_dirs = [s for s in subdirs if s.endswith(".dist-info")]
if not info_dirs:
raise UnsupportedWheel(".dist-info directory not found")
if len(info_dirs) > 1:
raise UnsupportedWheel(
"multiple .dist-info directories found: {}".format(", ".join(info_dirs))
)
info_dir = info_dirs[0]
info_dir_name = canonicalize_name(info_dir)
canonical_name = canonicalize_name(name)
if not info_dir_name.startswith(canonical_name):
raise UnsupportedWheel(
".dist-info directory {!r} does not start with {!r}".format(
info_dir, canonical_name
)
)
return info_dir
| Returns the name of the contained .dist-info directory.
Raises AssertionError or UnsupportedWheel if not found, >1 found, or
it doesn't match the provided name.
| 24 | 83 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def wheel_dist_info_dir(source, name):
# type: (ZipFile, str) -> str
# Zip file path separators must be /
subdirs = {p.split("/", 1)[0] for p in source.namelist()}
info_dirs = [s for s in subdirs if s.endswith(".dist-info")]
if not info_dirs:
raise UnsupportedWheel(".dist-info directory not found")
if len(info_dirs) > 1:
raise UnsupportedWheel(
"multiple .dist-info directories found: {}".format(", ".join(info_dirs))
)
info_dir = info_dirs[0]
info_dir_name = canonicalize_name(info_dir)
canonical_name = canonicalize_name(name)
if not info_dir_name.startswith(canonical_name):
raise UnsupportedWheel(
".dist-info directory {!r} does not start with {!r}".format(
info_dir, canonical_name
)
)
return info_dir
```
###Assistant : Returns the name of the contained .dist-info directory.
Raises AssertionError or UnsupportedWheel if not found, >1 found, or
it doesn't match the provided name.
|
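A hypothetical usage sketch for `wheel_dist_info_dir`: the wheel filename is invented, and helpers such as `UnsupportedWheel` and `canonicalize_name` come from pip's internals, so this only runs where those are importable.
```Python
from zipfile import ZipFile

with ZipFile("requests-2.28.1-py3-none-any.whl") as source:
    info_dir = wheel_dist_info_dir(source, "requests")
    print(info_dir)  # expected to be something like "requests-2.28.1.dist-info"
```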
916 | def in1d(ar1, ar2, assume_unique=False, invert=False):
# Ravel both arrays, behavior for the first array could be different
ar1 = np.asarray(ar1).ravel()
ar2 = np.asarray(ar2).ravel()
# Ensure that iteration through object arrays yields size-1 arrays
if ar2.dtype == object:
ar2 = ar2.reshape(-1, 1)
# Check if we can use a fast integer algorithm:
integer_arrays = (np.issubdtype(ar1.dtype, np.integer) and
np.issubdtype(ar2.dtype, np.integer))
if integer_arrays:
ar2_min = np.min(ar2)
ar2_max = np.max(ar2)
ar2_range = ar2_max - ar2_min
ar2_size = ar2.size
# Optimal performance is for approximately
# log10(size) > (log10(range) - 2.27) / 0.927, see discussion on
# https://github.com/numpy/numpy/pull/12065
optimal_parameters = (
np.log10(ar2_size + 1) >
((np.log10(ar2_range + 1) - 2.27) / 0.927)
)
if optimal_parameters:
if invert:
outgoing_array = np.ones_like(ar1, dtype=np.bool_)
else:
outgoing_array = np.zeros_like(ar1, dtype=np.bool_)
# Make elements 1 where the integer exists in ar2
if invert:
isin_helper_ar = np.ones(ar2_range + 1, dtype=np.bool_)
isin_helper_ar[ar2 - ar2_min] = 0
else:
isin_helper_ar = np.zeros(ar2_range + 1, dtype=np.bool_)
isin_helper_ar[ar2 - ar2_min] = 1
# Mask out elements we know won't work
basic_mask = (ar1 <= ar2_max) & (ar1 >= ar2_min)
outgoing_array[basic_mask] = isin_helper_ar[ar1[basic_mask] -
ar2_min]
return outgoing_array
# Check if one of the arrays may contain arbitrary objects
contains_object = ar1.dtype.hasobject or ar2.dtype.hasobject
# This code is run when
# a) the first condition is true, making the code significantly faster
# b) the second condition is true (i.e. `ar1` or `ar2` may contain
# arbitrary objects), since then sorting is not guaranteed to work
if len(ar2) < 10 * len(ar1) ** 0.145 or contains_object:
if invert:
mask = np.ones(len(ar1), dtype=bool)
for a in ar2:
mask &= (ar1 != a)
else:
mask = np.zeros(len(ar1), dtype=bool)
for a in ar2:
mask |= (ar1 == a)
return mask
# Otherwise use sorting
if not assume_unique:
ar1, rev_idx = np.unique(ar1, return_inverse=True)
ar2 = np.unique(ar2)
ar = np.concatenate((ar1, ar2))
# We need this to be a stable sort, so always use 'mergesort'
# here. The values from the first array should always come before
# the values from the second array.
order = ar.argsort(kind='mergesort')
sar = ar[order]
if invert:
bool_ar = (sar[1:] != sar[:-1])
else:
bool_ar = (sar[1:] == sar[:-1])
flag = np.concatenate((bool_ar, [invert]))
ret = np.empty(ar.shape, dtype=bool)
ret[order] = flag
if assume_unique:
return ret[:len(ar1)]
else:
return ret[rev_idx]
|
Test whether each element of a 1-D array is also present in a second array.
Returns a boolean array the same length as `ar1` that is True
where an element of `ar1` is in `ar2` and False otherwise.
We recommend using :func:`isin` instead of `in1d` for new code.
Parameters
----------
ar1 : (M,) array_like
Input array.
ar2 : array_like
The values against which to test each value of `ar1`.
assume_unique : bool, optional
If True, the input arrays are both assumed to be unique, which
can speed up the calculation. Default is False.
invert : bool, optional
If True, the values in the returned array are inverted (that is,
False where an element of `ar1` is in `ar2` and True otherwise).
Default is False. ``np.in1d(a, b, invert=True)`` is equivalent
to (but is faster than) ``np.invert(in1d(a, b))``.
.. versionadded:: 1.8.0
Returns
-------
in1d : (M,) ndarray, bool
The values `ar1[in1d]` are in `ar2`.
See Also
--------
isin : Version of this function that preserves the
shape of ar1.
numpy.lib.arraysetops : Module with a number of other functions for
performing set operations on arrays.
Notes
-----
`in1d` can be considered as an element-wise function version of the
python keyword `in`, for 1-D sequences. ``in1d(a, b)`` is roughly
equivalent to ``np.array([item in b for item in a])``.
However, this idea fails if `ar2` is a set, or similar (non-sequence)
container: As ``ar2`` is converted to an array, in those cases
``asarray(ar2)`` is an object array rather than the expected array of
contained values.
.. versionadded:: 1.4.0
Examples
--------
>>> test = np.array([0, 1, 2, 5, 0])
>>> states = [0, 2]
>>> mask = np.in1d(test, states)
>>> mask
array([ True, False, True, False, True])
>>> test[mask]
array([0, 2, 0])
>>> mask = np.in1d(test, states, invert=True)
>>> mask
array([False, True, False, True, False])
>>> test[mask]
array([1, 5])
| 303 | 367 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def in1d(ar1, ar2, assume_unique=False, invert=False):
# Ravel both arrays, behavior for the first array could be different
ar1 = np.asarray(ar1).ravel()
ar2 = np.asarray(ar2).ravel()
# Ensure that iteration through object arrays yields size-1 arrays
if ar2.dtype == object:
ar2 = ar2.reshape(-1, 1)
# Check if we can use a fast integer algorithm:
integer_arrays = (np.issubdtype(ar1.dtype, np.integer) and
np.issubdtype(ar2.dtype, np.integer))
if integer_arrays:
ar2_min = np.min(ar2)
ar2_max = np.max(ar2)
ar2_range = ar2_max - ar2_min
ar2_size = ar2.size
# Optimal performance is for approximately
# log10(size) > (log10(range) - 2.27) / 0.927, see discussion on
# https://github.com/numpy/numpy/pull/12065
optimal_parameters = (
np.log10(ar2_size + 1) >
((np.log10(ar2_range + 1) - 2.27) / 0.927)
)
if optimal_parameters:
if invert:
outgoing_array = np.ones_like(ar1, dtype=np.bool_)
else:
outgoing_array = np.zeros_like(ar1, dtype=np.bool_)
# Make elements 1 where the integer exists in ar2
if invert:
isin_helper_ar = np.ones(ar2_range + 1, dtype=np.bool_)
isin_helper_ar[ar2 - ar2_min] = 0
else:
isin_helper_ar = np.zeros(ar2_range + 1, dtype=np.bool_)
isin_helper_ar[ar2 - ar2_min] = 1
# Mask out elements we know won't work
basic_mask = (ar1 <= ar2_max) & (ar1 >= ar2_min)
outgoing_array[basic_mask] = isin_helper_ar[ar1[basic_mask] -
ar2_min]
return outgoing_array
# Check if one of the arrays may contain arbitrary objects
contains_object = ar1.dtype.hasobject or ar2.dtype.hasobject
# This code is run when
# a) the first condition is true, making the code significantly faster
# b) the second condition is true (i.e. `ar1` or `ar2` may contain
# arbitrary objects), since then sorting is not guaranteed to work
if len(ar2) < 10 * len(ar1) ** 0.145 or contains_object:
if invert:
mask = np.ones(len(ar1), dtype=bool)
for a in ar2:
mask &= (ar1 != a)
else:
mask = np.zeros(len(ar1), dtype=bool)
for a in ar2:
mask |= (ar1 == a)
return mask
# Otherwise use sorting
if not assume_unique:
ar1, rev_idx = np.unique(ar1, return_inverse=True)
ar2 = np.unique(ar2)
ar = np.concatenate((ar1, ar2))
# We need this to be a stable sort, so always use 'mergesort'
# here. The values from the first array should always come before
# the values from the second array.
order = ar.argsort(kind='mergesort')
sar = ar[order]
if invert:
bool_ar = (sar[1:] != sar[:-1])
else:
bool_ar = (sar[1:] == sar[:-1])
flag = np.concatenate((bool_ar, [invert]))
ret = np.empty(ar.shape, dtype=bool)
ret[order] = flag
if assume_unique:
return ret[:len(ar1)]
else:
return ret[rev_idx]
```
###Assistant :
Test whether each element of a 1-D array is also present in a second array.
Returns a boolean array the same length as `ar1` that is True
where an element of `ar1` is in `ar2` and False otherwise.
We recommend using :func:`isin` instead of `in1d` for new code.
Parameters
----------
ar1 : (M,) array_like
Input array.
ar2 : array_like
The values against which to test each value of `ar1`.
assume_unique : bool, optional
If True, the input arrays are both assumed to be unique, which
can speed up the calculation. Default is False.
invert : bool, optional
If True, the values in the returned array are inverted (that is,
False where an element of `ar1` is in `ar2` and True otherwise).
Default is False. ``np.in1d(a, b, invert=True)`` is equivalent
to (but is faster than) ``np.invert(in1d(a, b))``.
.. versionadded:: 1.8.0
Returns
-------
in1d : (M,) ndarray, bool
The values `ar1[in1d]` are in `ar2`.
See Also
--------
isin : Version of this function that preserves the
shape of ar1.
numpy.lib.arraysetops : Module with a number of other functions for
performing set operations on arrays.
Notes
-----
`in1d` can be considered as an element-wise function version of the
python keyword `in`, for 1-D sequences. ``in1d(a, b)`` is roughly
equivalent to ``np.array([item in b for item in a])``.
However, this idea fails if `ar2` is a set, or similar (non-sequence)
container: As ``ar2`` is converted to an array, in those cases
``asarray(ar2)`` is an object array rather than the expected array of
contained values.
.. versionadded:: 1.4.0
Examples
--------
>>> test = np.array([0, 1, 2, 5, 0])
>>> states = [0, 2]
>>> mask = np.in1d(test, states)
>>> mask
array([ True, False, True, False, True])
>>> test[mask]
array([0, 2, 0])
>>> mask = np.in1d(test, states, invert=True)
>>> mask
array([False, True, False, True, False])
>>> test[mask]
array([1, 5])
|
917 | def out_degree_centrality(G):
if len(G) <= 1:
return {n: 1 for n in G}
s = 1.0 / (len(G) - 1.0)
centrality = {n: d * s for n, d in G.out_degree()}
return centrality
| Compute the out-degree centrality for nodes.
The out-degree centrality for a node v is the fraction of nodes its
outgoing edges are connected to.
Parameters
----------
G : graph
A NetworkX graph
Returns
-------
nodes : dictionary
Dictionary of nodes with out-degree centrality as values.
Raises
------
NetworkXNotImplemented
If G is undirected.
Examples
--------
>>> G = nx.DiGraph([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)])
>>> nx.out_degree_centrality(G)
{0: 1.0, 1: 0.6666666666666666, 2: 0.0, 3: 0.0}
See Also
--------
degree_centrality, in_degree_centrality
Notes
-----
The degree centrality values are normalized by dividing by the maximum
possible degree in a simple graph n-1 where n is the number of nodes in G.
For multigraphs or graphs with self loops the maximum degree might
be higher than n-1 and values of degree centrality greater than 1
are possible.
| 136 | 33 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def out_degree_centrality(G):
if len(G) <= 1:
return {n: 1 for n in G}
s = 1.0 / (len(G) - 1.0)
centrality = {n: d * s for n, d in G.out_degree()}
return centrality
```
###Assistant : Compute the out-degree centrality for nodes.
The out-degree centrality for a node v is the fraction of nodes its
outgoing edges are connected to.
Parameters
----------
G : graph
A NetworkX graph
Returns
-------
nodes : dictionary
Dictionary of nodes with out-degree centrality as values.
Raises
------
NetworkXNotImplemented
If G is undirected.
Examples
--------
>>> G = nx.DiGraph([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)])
>>> nx.out_degree_centrality(G)
{0: 1.0, 1: 0.6666666666666666, 2: 0.0, 3: 0.0}
See Also
--------
degree_centrality, in_degree_centrality
Notes
-----
The degree centrality values are normalized by dividing by the maximum
possible degree in a simple graph n-1 where n is the number of nodes in G.
For multigraphs or graphs with self loops the maximum degree might
be higher than n-1 and values of degree centrality greater than 1
are possible.
|
918 | def _read_html(self, file_url):
with open(file_url.replace("file://", "").replace(" ", "")) as f:
return f.read()
if matplotlylib:
| Read and return the HTML contents from a file_url in the
form e.g. file:///Users/chriddyp/Repos/plotly.py/plotly-temp.html
| 14 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _read_html(self, file_url):
with open(file_url.replace("file://", "").replace(" ", "")) as f:
return f.read()
if matplotlylib:
```
###Assistant : Read and return the HTML contents from a file_url in the
form e.g. file:///Users/chriddyp/Repos/plotly.py/plotly-temp.html
|
919 | def set_omp_num_threads_if_unset() -> bool:
num_threads_from_env = os.environ.get("OMP_NUM_THREADS")
if num_threads_from_env is not None:
# No ops if it's set
return False
# If unset, try setting the correct CPU count assigned.
runtime_ctx = ray.get_runtime_context()
if runtime_ctx.worker.mode != ray._private.worker.WORKER_MODE:
# Non worker mode, no ops.
return False
num_assigned_cpus = runtime_ctx.get_assigned_resources().get("CPU")
if num_assigned_cpus is None:
# This is an actor task w/o any num_cpus specified, set it to 1
logger.debug(
"[ray] Forcing OMP_NUM_THREADS=1 to avoid performance "
"degradation with many workers (issue #6998). You can override this "
"by explicitly setting OMP_NUM_THREADS, or changing num_cpus."
)
num_assigned_cpus = 1
import math
# For num_cpu < 1: Set to 1.
# For num_cpus >= 1: Set to the floor of the actual assigned cpus.
omp_num_threads = max(math.floor(num_assigned_cpus), 1)
os.environ["OMP_NUM_THREADS"] = str(omp_num_threads)
return True
| Set the OMP_NUM_THREADS to default to num cpus assigned to the worker
This function sets the environment variable OMP_NUM_THREADS for the worker,
if the env is not previously set and it's running in worker (WORKER_MODE).
Returns True if OMP_NUM_THREADS is set in this function.
| 44 | 129 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def set_omp_num_threads_if_unset() -> bool:
num_threads_from_env = os.environ.get("OMP_NUM_THREADS")
if num_threads_from_env is not None:
# No ops if it's set
return False
# If unset, try setting the correct CPU count assigned.
runtime_ctx = ray.get_runtime_context()
if runtime_ctx.worker.mode != ray._private.worker.WORKER_MODE:
# Non worker mode, no ops.
return False
num_assigned_cpus = runtime_ctx.get_assigned_resources().get("CPU")
if num_assigned_cpus is None:
# This is an actor task w/o any num_cpus specified, set it to 1
logger.debug(
"[ray] Forcing OMP_NUM_THREADS=1 to avoid performance "
"degradation with many workers (issue #6998). You can override this "
"by explicitly setting OMP_NUM_THREADS, or changing num_cpus."
)
num_assigned_cpus = 1
import math
# For num_cpu < 1: Set to 1.
# For num_cpus >= 1: Set to the floor of the actual assigned cpus.
omp_num_threads = max(math.floor(num_assigned_cpus), 1)
os.environ["OMP_NUM_THREADS"] = str(omp_num_threads)
return True
```
###Assistant : Set the OMP_NUM_THREADS to default to num cpus assigned to the worker
This function sets the environment variable OMP_NUM_THREADS for the worker,
if the env is not previously set and it's running in worker (WORKER_MODE).
Returns True if OMP_NUM_THREADS is set in this function.
|
920 | async def test_binary_device_classes(hass, hk_driver):
entity_id = "binary_sensor.demo"
aid = 1
for device_class, (service, char, _) in BINARY_SENSOR_SERVICE_MAP.items():
hass.states.async_set(entity_id, STATE_OFF, {ATTR_DEVICE_CLASS: device_class})
await hass.async_block_till_done()
aid += 1
acc = BinarySensor(hass, hk_driver, "Binary Sensor", entity_id, aid, None)
assert acc.get_service(service).display_name == service
assert acc.char_detected.display_name == char
| Test if services and characteristics are assigned correctly. | 8 | 43 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def test_binary_device_classes(hass, hk_driver):
entity_id = "binary_sensor.demo"
aid = 1
for device_class, (service, char, _) in BINARY_SENSOR_SERVICE_MAP.items():
hass.states.async_set(entity_id, STATE_OFF, {ATTR_DEVICE_CLASS: device_class})
await hass.async_block_till_done()
aid += 1
acc = BinarySensor(hass, hk_driver, "Binary Sensor", entity_id, aid, None)
assert acc.get_service(service).display_name == service
assert acc.char_detected.display_name == char
```
###Assistant : Test if services and characteristics are assigned correctly.
|
921 | def Internaldate2tuple(resp):
mo = InternalDate.match(resp)
if not mo:
return None
mon = Mon2num[mo.group('mon')]
zonen = mo.group('zonen')
day = int(mo.group('day'))
year = int(mo.group('year'))
hour = int(mo.group('hour'))
min = int(mo.group('min'))
sec = int(mo.group('sec'))
zoneh = int(mo.group('zoneh'))
zonem = int(mo.group('zonem'))
# INTERNALDATE timezone must be subtracted to get UT
zone = (zoneh*60 + zonem)*60
if zonen == b'-':
zone = -zone
tt = (year, mon, day, hour, min, sec, -1, -1, -1)
utc = calendar.timegm(tt) - zone
return time.localtime(utc)
| Parse an IMAP4 INTERNALDATE string.
Return corresponding local time. The return value is a
time.struct_time tuple or None if the string has wrong format.
| 24 | 76 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def Internaldate2tuple(resp):
mo = InternalDate.match(resp)
if not mo:
return None
mon = Mon2num[mo.group('mon')]
zonen = mo.group('zonen')
day = int(mo.group('day'))
year = int(mo.group('year'))
hour = int(mo.group('hour'))
min = int(mo.group('min'))
sec = int(mo.group('sec'))
zoneh = int(mo.group('zoneh'))
zonem = int(mo.group('zonem'))
# INTERNALDATE timezone must be subtracted to get UT
zone = (zoneh*60 + zonem)*60
if zonen == b'-':
zone = -zone
tt = (year, mon, day, hour, min, sec, -1, -1, -1)
utc = calendar.timegm(tt) - zone
return time.localtime(utc)
```
###Assistant : Parse an IMAP4 INTERNALDATE string.
Return corresponding local time. The return value is a
time.struct_time tuple or None if the string has wrong format.
|
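Assuming this is the standard-library `imaplib` implementation, a typical call passes a FETCH response line containing an INTERNALDATE item:
```Python
import imaplib
import time

resp = b'5 (INTERNALDATE "17-Jul-1996 02:44:25 -0700")'
tt = imaplib.Internaldate2tuple(resp)          # time.struct_time in local time
print(time.strftime("%Y-%m-%d %H:%M:%S", tt))
```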
922 | def test_delete_get(self):
# Send request
response = self.client.get(
reverse("wagtaildocs:delete_multiple", args=(self.doc.id,))
)
# Check response
self.assertEqual(response.status_code, 405)
|
This tests that a GET request to the delete view returns a 405 "METHOD NOT ALLOWED" response
| 17 | 16 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_delete_get(self):
# Send request
response = self.client.get(
reverse("wagtaildocs:delete_multiple", args=(self.doc.id,))
)
# Check response
self.assertEqual(response.status_code, 405)
```
###Assistant :
This tests that a GET request to the delete view returns a 405 "METHOD NOT ALLOWED" response
|
923 | def test_stream_admin_remove_others_from_public_stream(self) -> None:
result = self.attempt_unsubscribe_of_principal(
query_count=15,
target_users=[self.example_user("cordelia")],
is_realm_admin=False,
is_stream_admin=True,
is_subbed=True,
invite_only=False,
target_users_subbed=True,
)
json = self.assert_json_success(result)
self.assert_length(json["removed"], 1)
self.assert_length(json["not_removed"], 0)
|
You can remove others from public streams you're a stream administrator of.
| 12 | 22 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_stream_admin_remove_others_from_public_stream(self) -> None:
result = self.attempt_unsubscribe_of_principal(
query_count=15,
target_users=[self.example_user("cordelia")],
is_realm_admin=False,
is_stream_admin=True,
is_subbed=True,
invite_only=False,
target_users_subbed=True,
)
json = self.assert_json_success(result)
self.assert_length(json["removed"], 1)
self.assert_length(json["not_removed"], 0)
```
###Assistant :
You can remove others from public streams you're a stream administrator of.
|
924 | def _mosaic_combine(self, loc, center_position_xy, img_shape_wh):
assert loc in ('top_left', 'top_right', 'bottom_left', 'bottom_right')
if loc == 'top_left':
# index0 to top left part of image
x1, y1, x2, y2 = max(center_position_xy[0] - img_shape_wh[0], 0), \
max(center_position_xy[1] - img_shape_wh[1], 0), \
center_position_xy[0], \
center_position_xy[1]
crop_coord = img_shape_wh[0] - (x2 - x1), img_shape_wh[1] - (
y2 - y1), img_shape_wh[0], img_shape_wh[1]
elif loc == 'top_right':
# index1 to top right part of image
x1, y1, x2, y2 = center_position_xy[0], \
max(center_position_xy[1] - img_shape_wh[1], 0), \
min(center_position_xy[0] + img_shape_wh[0],
self.img_scale[1] * 2), \
center_position_xy[1]
crop_coord = 0, img_shape_wh[1] - (y2 - y1), min(
img_shape_wh[0], x2 - x1), img_shape_wh[1]
elif loc == 'bottom_left':
# index2 to bottom left part of image
x1, y1, x2, y2 = max(center_position_xy[0] - img_shape_wh[0], 0), \
center_position_xy[1], \
center_position_xy[0], \
min(self.img_scale[0] * 2, center_position_xy[1] +
img_shape_wh[1])
crop_coord = img_shape_wh[0] - (x2 - x1), 0, img_shape_wh[0], min(
y2 - y1, img_shape_wh[1])
else:
# index3 to bottom right part of image
x1, y1, x2, y2 = center_position_xy[0], \
center_position_xy[1], \
min(center_position_xy[0] + img_shape_wh[0],
self.img_scale[1] * 2), \
min(self.img_scale[0] * 2, center_position_xy[1] +
img_shape_wh[1])
crop_coord = 0, 0, min(img_shape_wh[0],
x2 - x1), min(y2 - y1, img_shape_wh[1])
paste_coord = x1, y1, x2, y2
return paste_coord, crop_coord
| Calculate global coordinate of mosaic image and local coordinate of
cropped sub-image.
Args:
loc (str): Index for the sub-image, loc in ('top_left',
'top_right', 'bottom_left', 'bottom_right').
center_position_xy (Sequence[float]): Mixing center for 4 images,
(x, y).
img_shape_wh (Sequence[int]): Width and height of sub-image
Returns:
tuple[tuple[float]]: Corresponding coordinate of pasting and
cropping
- paste_coord (tuple): paste corner coordinate in mosaic image.
- crop_coord (tuple): crop corner coordinate in mosaic image.
| 67 | 201 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _mosaic_combine(self, loc, center_position_xy, img_shape_wh):
assert loc in ('top_left', 'top_right', 'bottom_left', 'bottom_right')
if loc == 'top_left':
# index0 to top left part of image
x1, y1, x2, y2 = max(center_position_xy[0] - img_shape_wh[0], 0), \
max(center_position_xy[1] - img_shape_wh[1], 0), \
center_position_xy[0], \
center_position_xy[1]
crop_coord = img_shape_wh[0] - (x2 - x1), img_shape_wh[1] - (
y2 - y1), img_shape_wh[0], img_shape_wh[1]
elif loc == 'top_right':
# index1 to top right part of image
x1, y1, x2, y2 = center_position_xy[0], \
max(center_position_xy[1] - img_shape_wh[1], 0), \
min(center_position_xy[0] + img_shape_wh[0],
self.img_scale[1] * 2), \
center_position_xy[1]
crop_coord = 0, img_shape_wh[1] - (y2 - y1), min(
img_shape_wh[0], x2 - x1), img_shape_wh[1]
elif loc == 'bottom_left':
# index2 to bottom left part of image
x1, y1, x2, y2 = max(center_position_xy[0] - img_shape_wh[0], 0), \
center_position_xy[1], \
center_position_xy[0], \
min(self.img_scale[0] * 2, center_position_xy[1] +
img_shape_wh[1])
crop_coord = img_shape_wh[0] - (x2 - x1), 0, img_shape_wh[0], min(
y2 - y1, img_shape_wh[1])
else:
# index3 to bottom right part of image
x1, y1, x2, y2 = center_position_xy[0], \
center_position_xy[1], \
min(center_position_xy[0] + img_shape_wh[0],
self.img_scale[1] * 2), \
min(self.img_scale[0] * 2, center_position_xy[1] +
img_shape_wh[1])
crop_coord = 0, 0, min(img_shape_wh[0],
x2 - x1), min(y2 - y1, img_shape_wh[1])
paste_coord = x1, y1, x2, y2
return paste_coord, crop_coord
```
###Assistant : Calculate global coordinate of mosaic image and local coordinate of
cropped sub-image.
Args:
loc (str): Index for the sub-image, loc in ('top_left',
'top_right', 'bottom_left', 'bottom_right').
center_position_xy (Sequence[float]): Mixing center for 4 images,
(x, y).
img_shape_wh (Sequence[int]): Width and height of sub-image
Returns:
tuple[tuple[float]]: Corresponding coordinate of pasting and
cropping
- paste_coord (tuple): paste corner coordinate in mosaic image.
- crop_coord (tuple): crop corner coordinate in mosaic image.
|
925 | def permute(self, perm, orientation='rows', direction='forward'):
r
from sympy.combinatorics import Permutation
# allow british variants and `columns`
if direction == 'forwards':
direction = 'forward'
if direction == 'backwards':
direction = 'backward'
if orientation == 'columns':
orientation = 'cols'
if direction not in ('forward', 'backward'):
raise TypeError("direction='{}' is an invalid kwarg. "
"Try 'forward' or 'backward'".format(direction))
if orientation not in ('rows', 'cols'):
raise TypeError("orientation='{}' is an invalid kwarg. "
"Try 'rows' or 'cols'".format(orientation))
if not isinstance(perm, (Permutation, Iterable)):
raise ValueError(
"{} must be a list, a list of lists, "
"or a SymPy permutation object.".format(perm))
# ensure all swaps are in range
max_index = self.rows if orientation == 'rows' else self.cols
if not all(0 <= t <= max_index for t in flatten(list(perm))):
raise IndexError("`swap` indices out of range.")
if perm and not isinstance(perm, Permutation) and \
isinstance(perm[0], Iterable):
if direction == 'forward':
perm = list(reversed(perm))
perm = Permutation(perm, size=max_index+1)
else:
perm = Permutation(perm, size=max_index+1)
if orientation == 'rows':
return self._eval_permute_rows(perm)
if orientation == 'cols':
return self._eval_permute_cols(perm)
| Permute the rows or columns of a matrix by the given list of
swaps.
Parameters
==========
perm : Permutation, list, or list of lists
A representation for the permutation.
If it is ``Permutation``, it is used directly with some
resizing with respect to the matrix size.
If it is specified as list of lists,
(e.g., ``[[0, 1], [0, 2]]``), then the permutation is formed
from applying the product of cycles. The direction in which the
cyclic product is applied is described below.
If it is specified as a list, the list should represent
an array form of a permutation. (e.g., ``[1, 2, 0]``) which
would form the swapping function
`0 \mapsto 1, 1 \mapsto 2, 2\mapsto 0`.
orientation : 'rows', 'cols'
A flag to control whether to permute the rows or the columns
direction : 'forward', 'backward'
A flag to control whether to apply the permutations from
the start of the list first, or from the back of the list
first.
For example, if the permutation specification is
``[[0, 1], [0, 2]]``,
If the flag is set to ``'forward'``, the cycle would be
formed as `0 \mapsto 2, 2 \mapsto 1, 1 \mapsto 0`.
If the flag is set to ``'backward'``, the cycle would be
formed as `0 \mapsto 1, 1 \mapsto 2, 2 \mapsto 0`.
If the argument ``perm`` is not in a form of list of lists,
this flag takes no effect.
Examples
========
>>> from sympy import eye
>>> M = eye(3)
>>> M.permute([[0, 1], [0, 2]], orientation='rows', direction='forward')
Matrix([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0]])
>>> from sympy import eye
>>> M = eye(3)
>>> M.permute([[0, 1], [0, 2]], orientation='rows', direction='backward')
Matrix([
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]])
Notes
=====
If a bijective function
`\sigma : \mathbb{N}_0 \rightarrow \mathbb{N}_0` denotes the
permutation.
If the matrix `A` is the matrix to permute, represented as
a horizontal or a vertical stack of vectors:
.. math::
A =
\begin{bmatrix}
a_0 \\ a_1 \\ \vdots \\ a_{n-1}
\end{bmatrix} =
\begin{bmatrix}
\alpha_0 & \alpha_1 & \cdots & \alpha_{n-1}
\end{bmatrix}
If the matrix `B` is the result, the permutation of matrix rows
is defined as:
.. math::
B := \begin{bmatrix}
a_{\sigma(0)} \\ a_{\sigma(1)} \\ \vdots \\ a_{\sigma(n-1)}
\end{bmatrix}
And the permutation of matrix columns is defined as:
.. math::
B := \begin{bmatrix}
\alpha_{\sigma(0)} & \alpha_{\sigma(1)} &
\cdots & \alpha_{\sigma(n-1)}
\end{bmatrix}
| 395 | 164 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def permute(self, perm, orientation='rows', direction='forward'):
r
from sympy.combinatorics import Permutation
# allow british variants and `columns`
if direction == 'forwards':
direction = 'forward'
if direction == 'backwards':
direction = 'backward'
if orientation == 'columns':
orientation = 'cols'
if direction not in ('forward', 'backward'):
raise TypeError("direction='{}' is an invalid kwarg. "
"Try 'forward' or 'backward'".format(direction))
if orientation not in ('rows', 'cols'):
raise TypeError("orientation='{}' is an invalid kwarg. "
"Try 'rows' or 'cols'".format(orientation))
if not isinstance(perm, (Permutation, Iterable)):
raise ValueError(
"{} must be a list, a list of lists, "
"or a SymPy permutation object.".format(perm))
# ensure all swaps are in range
max_index = self.rows if orientation == 'rows' else self.cols
if not all(0 <= t <= max_index for t in flatten(list(perm))):
raise IndexError("`swap` indices out of range.")
if perm and not isinstance(perm, Permutation) and \
isinstance(perm[0], Iterable):
if direction == 'forward':
perm = list(reversed(perm))
perm = Permutation(perm, size=max_index+1)
else:
perm = Permutation(perm, size=max_index+1)
if orientation == 'rows':
return self._eval_permute_rows(perm)
if orientation == 'cols':
return self._eval_permute_cols(perm)
```
###Assistant : Permute the rows or columns of a matrix by the given list of
swaps.
Parameters
==========
perm : Permutation, list, or list of lists
A representation for the permutation.
If it is ``Permutation``, it is used directly with some
resizing with respect to the matrix size.
If it is specified as list of lists,
(e.g., ``[[0, 1], [0, 2]]``), then the permutation is formed
from applying the product of cycles. The direction in which the
cyclic product is applied is described below.
If it is specified as a list, the list should represent
an array form of a permutation. (e.g., ``[1, 2, 0]``) which
would form the swapping function
`0 \mapsto 1, 1 \mapsto 2, 2\mapsto 0`.
orientation : 'rows', 'cols'
A flag to control whether to permute the rows or the columns
direction : 'forward', 'backward'
A flag to control whether to apply the permutations from
the start of the list first, or from the back of the list
first.
For example, if the permutation specification is
``[[0, 1], [0, 2]]``,
If the flag is set to ``'forward'``, the cycle would be
formed as `0 \mapsto 2, 2 \mapsto 1, 1 \mapsto 0`.
If the flag is set to ``'backward'``, the cycle would be
formed as `0 \mapsto 1, 1 \mapsto 2, 2 \mapsto 0`.
If the argument ``perm`` is not in a form of list of lists,
this flag takes no effect.
Examples
========
>>> from sympy import eye
>>> M = eye(3)
>>> M.permute([[0, 1], [0, 2]], orientation='rows', direction='forward')
Matrix([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0]])
>>> from sympy import eye
>>> M = eye(3)
>>> M.permute([[0, 1], [0, 2]], orientation='rows', direction='backward')
Matrix([
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]])
Notes
=====
If a bijective function
`\sigma : \mathbb{N}_0 \rightarrow \mathbb{N}_0` denotes the
permutation.
If the matrix `A` is the matrix to permute, represented as
a horizontal or a vertical stack of vectors:
.. math::
A =
\begin{bmatrix}
a_0 \\ a_1 \\ \vdots \\ a_{n-1}
\end{bmatrix} =
\begin{bmatrix}
\alpha_0 & \alpha_1 & \cdots & \alpha_{n-1}
\end{bmatrix}
If the matrix `B` is the result, the permutation of matrix rows
is defined as:
.. math::
B := \begin{bmatrix}
a_{\sigma(0)} \\ a_{\sigma(1)} \\ \vdots \\ a_{\sigma(n-1)}
\end{bmatrix}
And the permutation of matrix columns is defined as:
.. math::
B := \begin{bmatrix}
\alpha_{\sigma(0)} & \alpha_{\sigma(1)} &
\cdots & \alpha_{\sigma(n-1)}
\end{bmatrix}
|
926 | def apply(self, func, *args, **kwargs):
func(self, *args, **kwargs)
return self
|
Pass the grid to a user-supplied function and return self.
The `func` must accept an object of this type for its first
positional argument. Additional arguments are passed through.
The return value of `func` is ignored; this method returns self.
See the `pipe` method if you want the return value.
Added in v0.12.0.
| 53 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def apply(self, func, *args, **kwargs):
func(self, *args, **kwargs)
return self
```
###Assistant :
Pass the grid to a user-supplied function and return self.
The `func` must accept an object of this type for its first
positional argument. Additional arguments are passed through.
The return value of `func` is ignored; this method returns self.
See the `pipe` method if you want the return value.
Added in v0.12.0.
|
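The method body of `apply` is tiny, so the value lies in the fluent pattern it enables. The `Grid` class below is a hypothetical stand-in used only to show that pattern; the real method belongs to whichever grid class defines it.
```Python
class Grid:
    """Hypothetical stand-in for the grid class that defines ``apply``."""

    def __init__(self):
        self.title = None

    def apply(self, func, *args, **kwargs):
        func(self, *args, **kwargs)   # return value of func is ignored
        return self                   # returning self allows chaining


def set_title(grid, text):
    grid.title = text                 # mutate the grid in place


g = Grid().apply(set_title, "hello")  # apply returns the grid itself
print(g.title)                        # hello
```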
927 | def copy_m2m_relationships(obj1, obj2, fields, kwargs=None):
for field_name in fields:
if hasattr(obj1, field_name):
try:
field_obj = obj1._meta.get_field(field_name)
except FieldDoesNotExist:
continue
if isinstance(field_obj, ManyToManyField):
# Many to Many can be specified as field_name
src_field_value = getattr(obj1, field_name)
if kwargs and field_name in kwargs:
override_field_val = kwargs[field_name]
# TODO: Should we spike this out or just put the for loop inside the next if and make everything respect order?
if field_name == 'instance_groups':
# instance_groups are a list but we need to preserve the order
for ig_id in override_field_val:
getattr(obj2, field_name).add(ig_id)
continue
if isinstance(override_field_val, (set, list, QuerySet)):
getattr(obj2, field_name).add(*override_field_val)
continue
if override_field_val.__class__.__name__ == 'ManyRelatedManager':
src_field_value = override_field_val
dest_field = getattr(obj2, field_name)
dest_field.add(*list(src_field_value.all().values_list('id', flat=True)))
|
In-place operation.
Given two saved objects, copies related objects from obj1
to obj2 to field of same name, if field occurs in `fields`
| 23 | 110 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def copy_m2m_relationships(obj1, obj2, fields, kwargs=None):
for field_name in fields:
if hasattr(obj1, field_name):
try:
field_obj = obj1._meta.get_field(field_name)
except FieldDoesNotExist:
continue
if isinstance(field_obj, ManyToManyField):
# Many to Many can be specified as field_name
src_field_value = getattr(obj1, field_name)
if kwargs and field_name in kwargs:
override_field_val = kwargs[field_name]
# TODO: Should we spike this out or just put the for loop inside the next if and make everything respect order?
if field_name == 'instance_groups':
# instance_groups are a list but we need to preserve the order
for ig_id in override_field_val:
getattr(obj2, field_name).add(ig_id)
continue
if isinstance(override_field_val, (set, list, QuerySet)):
getattr(obj2, field_name).add(*override_field_val)
continue
if override_field_val.__class__.__name__ == 'ManyRelatedManager':
src_field_value = override_field_val
dest_field = getattr(obj2, field_name)
dest_field.add(*list(src_field_value.all().values_list('id', flat=True)))
```
###Assistant :
In-place operation.
Given two saved objects, copies related objects from obj1
to obj2 to field of same name, if field occurs in `fields`
|
928 | def load_from_pipeline(pipeline):
try:
import transformers
except ImportError:
raise ImportError(
"transformers not installed. Please try `pip install transformers`"
)
if not isinstance(pipeline, transformers.Pipeline):
raise ValueError("pipeline must be a transformers.Pipeline")
# Handle the different pipelines. The has_attr() checks to make sure the pipeline exists in the
# version of the transformers library that the user has installed.
if hasattr(transformers, "AudioClassificationPipeline") and isinstance(
pipeline, transformers.AudioClassificationPipeline
):
pipeline_info = {
"inputs": inputs.Audio(label="Input", source="microphone", type="filepath"),
"outputs": outputs.Label(label="Class", type="confidences"),
"preprocess": lambda i: {"inputs": i},
"postprocess": lambda r: {i["label"].split(", ")[0]: i["score"] for i in r},
}
elif hasattr(transformers, "AutomaticSpeechRecognitionPipeline") and isinstance(
pipeline, transformers.AutomaticSpeechRecognitionPipeline
):
pipeline_info = {
"inputs": inputs.Audio(label="Input", source="microphone", type="filepath"),
"outputs": outputs.Textbox(label="Output"),
"preprocess": lambda i: {"inputs": i},
"postprocess": lambda r: r["text"],
}
elif hasattr(transformers, "FeatureExtractionPipeline") and isinstance(
pipeline, transformers.FeatureExtractionPipeline
):
pipeline_info = {
"inputs": inputs.Textbox(label="Input"),
"outputs": outputs.Dataframe(label="Output"),
"preprocess": lambda x: {"inputs": x},
"postprocess": lambda r: r[0],
}
elif hasattr(transformers, "FillMaskPipeline") and isinstance(
pipeline, transformers.FillMaskPipeline
):
pipeline_info = {
"inputs": inputs.Textbox(label="Input"),
"outputs": outputs.Label(label="Classification", type="confidences"),
"preprocess": lambda x: {"inputs": x},
"postprocess": lambda r: {i["token_str"]: i["score"] for i in r},
}
elif hasattr(transformers, "ImageClassificationPipeline") and isinstance(
pipeline, transformers.ImageClassificationPipeline
):
pipeline_info = {
"inputs": inputs.Image(label="Input Image", type="filepath"),
"outputs": outputs.Label(label="Classification", type="confidences"),
"preprocess": lambda i: {"images": i},
"postprocess": lambda r: {i["label"].split(", ")[0]: i["score"] for i in r},
}
elif hasattr(transformers, "QuestionAnsweringPipeline") and isinstance(
pipeline, transformers.QuestionAnsweringPipeline
):
pipeline_info = {
"inputs": [
inputs.Textbox(label="Context", lines=7),
inputs.Textbox(label="Question"),
],
"outputs": [outputs.Textbox(label="Answer"), outputs.Label(label="Score")],
"preprocess": lambda c, q: {"context": c, "question": q},
"postprocess": lambda r: (r["answer"], r["score"]),
}
elif hasattr(transformers, "SummarizationPipeline") and isinstance(
pipeline, transformers.SummarizationPipeline
):
pipeline_info = {
"inputs": inputs.Textbox(label="Input", lines=7),
"outputs": outputs.Textbox(label="Summary"),
"preprocess": lambda x: {"inputs": x},
"postprocess": lambda r: r[0]["summary_text"],
}
elif hasattr(transformers, "TextClassificationPipeline") and isinstance(
pipeline, transformers.TextClassificationPipeline
):
pipeline_info = {
"inputs": inputs.Textbox(label="Input"),
"outputs": outputs.Label(label="Classification", type="confidences"),
"preprocess": lambda x: [x],
"postprocess": lambda r: {i["label"].split(", ")[0]: i["score"] for i in r},
}
elif hasattr(transformers, "TextGenerationPipeline") and isinstance(
pipeline, transformers.TextGenerationPipeline
):
pipeline_info = {
"inputs": inputs.Textbox(label="Input"),
"outputs": outputs.Textbox(label="Output"),
"preprocess": lambda x: {"text_inputs": x},
"postprocess": lambda r: r[0]["generated_text"],
}
elif hasattr(transformers, "TranslationPipeline") and isinstance(
pipeline, transformers.TranslationPipeline
):
pipeline_info = {
"inputs": inputs.Textbox(label="Input"),
"outputs": outputs.Textbox(label="Translation"),
"preprocess": lambda x: [x],
"postprocess": lambda r: r[0]["translation_text"],
}
elif hasattr(transformers, "Text2TextGenerationPipeline") and isinstance(
pipeline, transformers.Text2TextGenerationPipeline
):
pipeline_info = {
"inputs": inputs.Textbox(label="Input"),
"outputs": outputs.Textbox(label="Generated Text"),
"preprocess": lambda x: [x],
"postprocess": lambda r: r[0]["generated_text"],
}
elif hasattr(transformers, "ZeroShotClassificationPipeline") and isinstance(
pipeline, transformers.ZeroShotClassificationPipeline
):
pipeline_info = {
"inputs": [
inputs.Textbox(label="Input"),
inputs.Textbox(label="Possible class names (" "comma-separated)"),
inputs.Checkbox(label="Allow multiple true classes"),
],
"outputs": outputs.Label(label="Classification", type="confidences"),
"preprocess": lambda i, c, m: {
"sequences": i,
"candidate_labels": c,
"multi_label": m,
},
"postprocess": lambda r: {
r["labels"][i]: r["scores"][i] for i in range(len(r["labels"]))
},
}
else:
raise ValueError("Unsupported pipeline type: {}".format(type(pipeline)))
# define the function that will be called by the Interface |
Gets the appropriate Interface kwargs for a given Hugging Face transformers.Pipeline.
pipeline (transformers.Pipeline): the transformers.Pipeline from which to create an interface
Returns:
(dict): a dictionary of kwargs that can be used to construct an Interface object
| 36 | 440 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def load_from_pipeline(pipeline):
try:
import transformers
except ImportError:
raise ImportError(
"transformers not installed. Please try `pip install transformers`"
)
if not isinstance(pipeline, transformers.Pipeline):
raise ValueError("pipeline must be a transformers.Pipeline")
# Handle the different pipelines. The has_attr() checks to make sure the pipeline exists in the
# version of the transformers library that the user has installed.
if hasattr(transformers, "AudioClassificationPipeline") and isinstance(
pipeline, transformers.AudioClassificationPipeline
):
pipeline_info = {
"inputs": inputs.Audio(label="Input", source="microphone", type="filepath"),
"outputs": outputs.Label(label="Class", type="confidences"),
"preprocess": lambda i: {"inputs": i},
"postprocess": lambda r: {i["label"].split(", ")[0]: i["score"] for i in r},
}
elif hasattr(transformers, "AutomaticSpeechRecognitionPipeline") and isinstance(
pipeline, transformers.AutomaticSpeechRecognitionPipeline
):
pipeline_info = {
"inputs": inputs.Audio(label="Input", source="microphone", type="filepath"),
"outputs": outputs.Textbox(label="Output"),
"preprocess": lambda i: {"inputs": i},
"postprocess": lambda r: r["text"],
}
elif hasattr(transformers, "FeatureExtractionPipeline") and isinstance(
pipeline, transformers.FeatureExtractionPipeline
):
pipeline_info = {
"inputs": inputs.Textbox(label="Input"),
"outputs": outputs.Dataframe(label="Output"),
"preprocess": lambda x: {"inputs": x},
"postprocess": lambda r: r[0],
}
elif hasattr(transformers, "FillMaskPipeline") and isinstance(
pipeline, transformers.FillMaskPipeline
):
pipeline_info = {
"inputs": inputs.Textbox(label="Input"),
"outputs": outputs.Label(label="Classification", type="confidences"),
"preprocess": lambda x: {"inputs": x},
"postprocess": lambda r: {i["token_str"]: i["score"] for i in r},
}
elif hasattr(transformers, "ImageClassificationPipeline") and isinstance(
pipeline, transformers.ImageClassificationPipeline
):
pipeline_info = {
"inputs": inputs.Image(label="Input Image", type="filepath"),
"outputs": outputs.Label(label="Classification", type="confidences"),
"preprocess": lambda i: {"images": i},
"postprocess": lambda r: {i["label"].split(", ")[0]: i["score"] for i in r},
}
elif hasattr(transformers, "QuestionAnsweringPipeline") and isinstance(
pipeline, transformers.QuestionAnsweringPipeline
):
pipeline_info = {
"inputs": [
inputs.Textbox(label="Context", lines=7),
inputs.Textbox(label="Question"),
],
"outputs": [outputs.Textbox(label="Answer"), outputs.Label(label="Score")],
"preprocess": lambda c, q: {"context": c, "question": q},
"postprocess": lambda r: (r["answer"], r["score"]),
}
elif hasattr(transformers, "SummarizationPipeline") and isinstance(
pipeline, transformers.SummarizationPipeline
):
pipeline_info = {
"inputs": inputs.Textbox(label="Input", lines=7),
"outputs": outputs.Textbox(label="Summary"),
"preprocess": lambda x: {"inputs": x},
"postprocess": lambda r: r[0]["summary_text"],
}
elif hasattr(transformers, "TextClassificationPipeline") and isinstance(
pipeline, transformers.TextClassificationPipeline
):
pipeline_info = {
"inputs": inputs.Textbox(label="Input"),
"outputs": outputs.Label(label="Classification", type="confidences"),
"preprocess": lambda x: [x],
"postprocess": lambda r: {i["label"].split(", ")[0]: i["score"] for i in r},
}
elif hasattr(transformers, "TextGenerationPipeline") and isinstance(
pipeline, transformers.TextGenerationPipeline
):
pipeline_info = {
"inputs": inputs.Textbox(label="Input"),
"outputs": outputs.Textbox(label="Output"),
"preprocess": lambda x: {"text_inputs": x},
"postprocess": lambda r: r[0]["generated_text"],
}
elif hasattr(transformers, "TranslationPipeline") and isinstance(
pipeline, transformers.TranslationPipeline
):
pipeline_info = {
"inputs": inputs.Textbox(label="Input"),
"outputs": outputs.Textbox(label="Translation"),
"preprocess": lambda x: [x],
"postprocess": lambda r: r[0]["translation_text"],
}
elif hasattr(transformers, "Text2TextGenerationPipeline") and isinstance(
pipeline, transformers.Text2TextGenerationPipeline
):
pipeline_info = {
"inputs": inputs.Textbox(label="Input"),
"outputs": outputs.Textbox(label="Generated Text"),
"preprocess": lambda x: [x],
"postprocess": lambda r: r[0]["generated_text"],
}
elif hasattr(transformers, "ZeroShotClassificationPipeline") and isinstance(
pipeline, transformers.ZeroShotClassificationPipeline
):
pipeline_info = {
"inputs": [
inputs.Textbox(label="Input"),
inputs.Textbox(label="Possible class names (" "comma-separated)"),
inputs.Checkbox(label="Allow multiple true classes"),
],
"outputs": outputs.Label(label="Classification", type="confidences"),
"preprocess": lambda i, c, m: {
"sequences": i,
"candidate_labels": c,
"multi_label": m,
},
"postprocess": lambda r: {
r["labels"][i]: r["scores"][i] for i in range(len(r["labels"]))
},
}
else:
raise ValueError("Unsupported pipeline type: {}".format(type(pipeline)))
# define the function that will be called by the Interface
```
###Assistant :
Gets the appropriate Interface kwargs for a given Hugging Face transformers.Pipeline.
pipeline (transformers.Pipeline): the transformers.Pipeline from which to create an interface
Returns:
(dict): a dictionary of kwargs that can be used to construct an Interface object
|
929 | def calculate_bounds_for_mechanism(value_array, min_val_array, max_val_array):
# TODO: Double check whether the iDPGaussianMechanism class squares its squared_l2_norm values!!
worst_case_l2_norm = np.sqrt(np.sum(np.square(max_val_array - min_val_array))) * np.ones_like(value_array)
l2_norm = np.sqrt(np.sum(np.square(value_array))) * np.ones_like(value_array)
# print(l2_norm.shape, worst_case_l2_norm.shape)
# print(l2_norm.shape)
return l2_norm, worst_case_l2_norm
| Calculates the squared L2 norm values needed to create a Mechanism and to calculate privacy budget + spend. If you calculate the privacy budget spend with the worst case bound, you can show this number to the D.S.
If you calculate it with the regular value (the value computed below when public_only = False), you cannot show the
privacy budget to the DS because this violates privacy.
| 66 | 36 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def calculate_bounds_for_mechanism(value_array, min_val_array, max_val_array):
# TODO: Double check whether the iDPGaussianMechanism class squares its squared_l2_norm values!!
worst_case_l2_norm = np.sqrt(np.sum(np.square(max_val_array - min_val_array))) * np.ones_like(value_array)
l2_norm = np.sqrt(np.sum(np.square(value_array))) * np.ones_like(value_array)
# print(l2_norm.shape, worst_case_l2_norm.shape)
# print(l2_norm.shape)
return l2_norm, worst_case_l2_norm
```
###Assistant : Calculates the squared L2 norm values needed to create a Mechanism and to calculate privacy budget + spend. If you calculate the privacy budget spend with the worst case bound, you can show this number to the D.S.
If you calculate it with the regular value (the value computed below when public_only = False), you cannot show the
privacy budget to the DS because this violates privacy.
|
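A worked NumPy example of the two bounds computed above, with made-up values and per-entry min/max data bounds (only NumPy is assumed):

```Python
import numpy as np

value_array = np.array([1.0, 2.0, 2.0])
min_val_array = np.zeros(3)
max_val_array = np.full(3, 5.0)

# L2 norm of the actual values, broadcast to the value shape.
l2_norm = np.sqrt(np.sum(np.square(value_array))) * np.ones_like(value_array)
# Worst-case L2 norm from the data bounds alone (the publicly showable bound).
worst_case_l2_norm = np.sqrt(np.sum(np.square(max_val_array - min_val_array))) * np.ones_like(value_array)

print(l2_norm)             # [3. 3. 3.]
print(worst_case_l2_norm)  # [8.66025404 8.66025404 8.66025404]
```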
930 | def get_po_entries(conditions):
return frappe.db.sql(
.format(
conditions=conditions
),
as_dict=1,
) # nosec
|
SELECT
child.name,
child.parent,
child.cost_center,
child.project,
child.warehouse,
child.material_request,
child.material_request_item,
child.item_code,
child.stock_uom,
child.qty,
child.amount,
child.base_amount,
child.schedule_date,
parent.transaction_date,
parent.supplier,
parent.status,
parent.owner
FROM `tabPurchase Order` parent, `tabPurchase Order Item` child
WHERE
parent.docstatus = 1
AND parent.name = child.parent
AND parent.status not in ("Closed","Completed","Cancelled")
{conditions}
GROUP BY
parent.name, child.item_code
| 44 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_po_entries(conditions):
return frappe.db.sql(
.format(
conditions=conditions
),
as_dict=1,
) # nosec
```
###Assistant :
SELECT
child.name,
child.parent,
child.cost_center,
child.project,
child.warehouse,
child.material_request,
child.material_request_item,
child.item_code,
child.stock_uom,
child.qty,
child.amount,
child.base_amount,
child.schedule_date,
parent.transaction_date,
parent.supplier,
parent.status,
parent.owner
FROM `tabPurchase Order` parent, `tabPurchase Order Item` child
WHERE
parent.docstatus = 1
AND parent.name = child.parent
AND parent.status not in ("Closed","Completed","Cancelled")
{conditions}
GROUP BY
parent.name, child.item_code
|
931 | def get_year(self):
year = self.year
if year is None:
try:
year = self.kwargs["year"]
except KeyError:
try:
year = self.request.GET["year"]
except KeyError:
raise Http404(_("No year specified"))
return year
| Return the year for which this view should display data. | 10 | 27 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_year(self):
year = self.year
if year is None:
try:
year = self.kwargs["year"]
except KeyError:
try:
year = self.request.GET["year"]
except KeyError:
raise Http404(_("No year specified"))
return year
```
###Assistant : Return the year for which this view should display data.
|
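The attribute → URL kwargs → query string precedence can be sketched without Django; ``resolve_year`` is a hypothetical stand-alone helper and the ``Http404`` is replaced by a ``ValueError`` so the example runs anywhere:

```Python
def resolve_year(explicit_year, url_kwargs, query_params):
    # Mirrors the lookup order: instance attribute, then URL kwargs, then GET parameters.
    if explicit_year is not None:
        return explicit_year
    if "year" in url_kwargs:
        return url_kwargs["year"]
    if "year" in query_params:
        return query_params["year"]
    raise ValueError("No year specified")


print(resolve_year(None, {}, {"year": "2024"}))    # '2024'
print(resolve_year("1999", {"year": "2024"}, {}))  # '1999' (attribute wins)
```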
932 | def test_overwrite_storage_path(self):
call_command("document_retagger", "--storage_path", "--overwrite")
d_first, d_second, d_unrelated, d_auto = self.get_updated_docs()
self.assertEqual(d_first.storage_path, self.sp2)
self.assertEqual(d_auto.storage_path, self.sp1)
self.assertIsNone(d_second.storage_path)
self.assertEqual(d_unrelated.storage_path, self.sp2)
|
GIVEN:
- 2 storage paths with documents which match them
- 1 document which matches but has a storage path
WHEN:
- document retagger is called with overwrite
THEN:
- Matching document's storage paths updated
- Non-matching documents have no storage path
- Existing storage path overwritten
| 47 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_overwrite_storage_path(self):
call_command("document_retagger", "--storage_path", "--overwrite")
d_first, d_second, d_unrelated, d_auto = self.get_updated_docs()
self.assertEqual(d_first.storage_path, self.sp2)
self.assertEqual(d_auto.storage_path, self.sp1)
self.assertIsNone(d_second.storage_path)
self.assertEqual(d_unrelated.storage_path, self.sp2)
```
###Assistant :
GIVEN:
- 2 storage paths with documents which match them
- 1 document which matches but has a storage path
WHEN:
- document retagger is called with overwrite
THEN:
- Matching document's storage paths updated
- Non-matching documents have no storage path
- Existing storage patch overwritten
|
933 | def invert(self):
return DataFrameDefault.register(pandas.DataFrame.__invert__)(self)
|
Apply bitwise inversion for each element of the QueryCompiler.
Returns
-------
BaseQueryCompiler
New QueryCompiler containing bitwise inversion for each value.
| 20 | 4 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def invert(self):
return DataFrameDefault.register(pandas.DataFrame.__invert__)(self)
```
###Assistant :
Apply bitwise inversion for each element of the QueryCompiler.
Returns
-------
BaseQueryCompiler
New QueryCompiler containing bitwise inversion for each value.
|
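What the delegated ``pandas.DataFrame.__invert__`` does per element, shown on a tiny frame (assumes pandas is installed; the values are arbitrary):

```Python
import pandas as pd

df = pd.DataFrame({"flag": [True, False, True], "bits": [0b0011, 0b0101, 0b1111]})
print(~df["flag"])  # elementwise logical NOT: False, True, False
print(~df["bits"])  # bitwise NOT on integers: -4, -6, -16
```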
934 | def test_center_head_loss(self):
s = 256
img_metas = [{'batch_input_shape': (s, s, 3)}]
test_cfg = dict(topK=100, max_per_img=100)
centernet_head = CenterNetHead(
num_classes=4, in_channels=1, feat_channels=4, test_cfg=test_cfg)
feat = [torch.rand(1, 1, s, s)]
center_out, wh_out, offset_out = centernet_head.forward(feat)
# Test that empty ground truth encourages the network to
# predict background
gt_instances = InstanceData()
gt_instances.bboxes = torch.empty((0, 4))
gt_instances.labels = torch.LongTensor([])
empty_gt_losses = centernet_head.loss(center_out, wh_out, offset_out,
[gt_instances], img_metas)
loss_center = empty_gt_losses['loss_center_heatmap']
loss_wh = empty_gt_losses['loss_wh']
loss_offset = empty_gt_losses['loss_offset']
assert loss_center.item() > 0, 'loss_center should be non-zero'
assert loss_wh.item() == 0, (
'there should be no loss_wh when there are no true boxes')
assert loss_offset.item() == 0, (
'there should be no loss_offset when there are no true boxes')
# When truth is non-empty then both cls and box loss
# should be nonzero for random inputs
gt_instances = InstanceData()
gt_instances.bboxes = torch.Tensor(
[[23.6667, 23.8757, 238.6326, 151.8874]])
gt_instances.labels = torch.LongTensor([2])
one_gt_losses = centernet_head.loss(center_out, wh_out, offset_out,
[gt_instances], img_metas)
loss_center = one_gt_losses['loss_center_heatmap']
loss_wh = one_gt_losses['loss_wh']
loss_offset = one_gt_losses['loss_offset']
assert loss_center.item() > 0, 'loss_center should be non-zero'
assert loss_wh.item() > 0, 'loss_wh should be non-zero'
assert loss_offset.item() > 0, 'loss_offset should be non-zero'
| Tests center head loss when truth is empty and non-empty. | 10 | 183 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_center_head_loss(self):
s = 256
img_metas = [{'batch_input_shape': (s, s, 3)}]
test_cfg = dict(topK=100, max_per_img=100)
centernet_head = CenterNetHead(
num_classes=4, in_channels=1, feat_channels=4, test_cfg=test_cfg)
feat = [torch.rand(1, 1, s, s)]
center_out, wh_out, offset_out = centernet_head.forward(feat)
# Test that empty ground truth encourages the network to
# predict background
gt_instances = InstanceData()
gt_instances.bboxes = torch.empty((0, 4))
gt_instances.labels = torch.LongTensor([])
empty_gt_losses = centernet_head.loss(center_out, wh_out, offset_out,
[gt_instances], img_metas)
loss_center = empty_gt_losses['loss_center_heatmap']
loss_wh = empty_gt_losses['loss_wh']
loss_offset = empty_gt_losses['loss_offset']
assert loss_center.item() > 0, 'loss_center should be non-zero'
assert loss_wh.item() == 0, (
'there should be no loss_wh when there are no true boxes')
assert loss_offset.item() == 0, (
'there should be no loss_offset when there are no true boxes')
# When truth is non-empty then both cls and box loss
# should be nonzero for random inputs
gt_instances = InstanceData()
gt_instances.bboxes = torch.Tensor(
[[23.6667, 23.8757, 238.6326, 151.8874]])
gt_instances.labels = torch.LongTensor([2])
one_gt_losses = centernet_head.loss(center_out, wh_out, offset_out,
[gt_instances], img_metas)
loss_center = one_gt_losses['loss_center_heatmap']
loss_wh = one_gt_losses['loss_wh']
loss_offset = one_gt_losses['loss_offset']
assert loss_center.item() > 0, 'loss_center should be non-zero'
assert loss_wh.item() > 0, 'loss_wh should be non-zero'
assert loss_offset.item() > 0, 'loss_offset should be non-zero'
```
###Assistant : Tests center head loss when truth is empty and non-empty.
|
935 | def from_package(package):
spec = wrap_spec(package)
reader = spec.loader.get_resource_reader(spec.name)
return reader.files()
@contextlib.contextmanager |
Return a Traversable object for the given package.
| 8 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def from_package(package):
spec = wrap_spec(package)
reader = spec.loader.get_resource_reader(spec.name)
return reader.files()
@contextlib.contextmanager
```
###Assistant :
Return a Traversable object for the given package.
|
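A stdlib sketch of the same idea using ``importlib.resources.files`` (Python 3.9+); the helper above does this through its own ``wrap_spec`` machinery. ``json`` is used only because it is guaranteed to be importable:

```Python
from importlib.resources import files

root = files("json")          # Traversable for the package's directory
for entry in root.iterdir():  # list the files shipped with the package
    print(entry.name)
```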
936 | def _is_refund_ongoing(payment):
return (
payment.transactions.filter(
kind=TransactionKind.REFUND_ONGOING, is_success=True
).exists()
if payment
else False
)
| Return True if refund is ongoing for given payment. | 9 | 13 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _is_refund_ongoing(payment):
return (
payment.transactions.filter(
kind=TransactionKind.REFUND_ONGOING, is_success=True
).exists()
if payment
else False
)
```
###Assistant : Return True if refund is ongoing for given payment.
|
937 | def installed(name, updates=None):
if isinstance(updates, str):
updates = [updates]
if not updates:
updates = name
ret = {"name": name, "changes": {}, "result": True, "comment": ""}
wua = salt.utils.win_update.WindowsUpdateAgent()
# Search for updates
install_list = wua.search(updates)
# No updates found
if install_list.count() == 0:
ret["comment"] = "No updates found"
return ret
# List of updates to download
download = salt.utils.win_update.Updates()
for item in install_list.updates:
if not salt.utils.data.is_true(item.IsDownloaded):
download.updates.Add(item)
# List of updates to install
install = salt.utils.win_update.Updates()
installed_updates = []
for item in install_list.updates:
if not salt.utils.data.is_true(item.IsInstalled):
install.updates.Add(item)
else:
installed_updates.extend("KB" + kb for kb in item.KBArticleIDs)
if install.count() == 0:
ret["comment"] = "Updates already installed: "
ret["comment"] += "\n - ".join(installed_updates)
return ret
# Return comment of changes if test.
if __opts__["test"]:
ret["result"] = None
ret["comment"] = "Updates will be installed:"
for update in install.updates:
ret["comment"] += "\n"
ret["comment"] += ": ".join([update.Identity.UpdateID, update.Title])
return ret
# Download updates
wua.download(download)
# Install updates
wua.install(install)
# Refresh windows update info
wua.refresh()
post_info = wua.updates().list()
# Verify the installation
for item in install.list():
if not salt.utils.data.is_true(post_info[item]["Installed"]):
ret["changes"]["failed"] = {
item: {
"Title": post_info[item]["Title"],
"KBs": post_info[item]["KBs"],
}
}
ret["result"] = False
else:
ret["changes"]["installed"] = {
item: {
"Title": post_info[item]["Title"],
"NeedsReboot": post_info[item]["NeedsReboot"],
"KBs": post_info[item]["KBs"],
}
}
if ret["changes"].get("failed", False):
ret["comment"] = "Updates failed"
else:
ret["comment"] = "Updates installed successfully"
return ret
|
Ensure Microsoft Updates are installed. Updates will be downloaded if
needed.
Args:
name (str):
The identifier of a single update to install.
updates (list):
A list of identifiers for updates to be installed. Overrides
``name``. Default is None.
.. note:: Identifiers can be the GUID, the KB number, or any part of the
Title of the Microsoft update. GUIDs and KBs are the preferred method
to ensure you're installing the correct update.
.. warning:: Using a partial KB number or a partial Title could result in
more than one update being installed.
Returns:
dict: A dictionary containing the results of the update
CLI Example:
.. code-block:: yaml
# using a GUID
install_update:
wua.installed:
- name: 28cf1b09-2b1a-458c-9bd1-971d1b26b211
# using a KB
install_update:
wua.installed:
- name: KB3194343
# using the full Title
install_update:
wua.installed:
- name: Security Update for Adobe Flash Player for Windows 10 Version 1607 (for x64-based Systems) (KB3194343)
# Install multiple updates
install_updates:
wua.installed:
- updates:
- KB3194343
- 28cf1b09-2b1a-458c-9bd1-971d1b26b211
| 161 | 215 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def installed(name, updates=None):
if isinstance(updates, str):
updates = [updates]
if not updates:
updates = name
ret = {"name": name, "changes": {}, "result": True, "comment": ""}
wua = salt.utils.win_update.WindowsUpdateAgent()
# Search for updates
install_list = wua.search(updates)
# No updates found
if install_list.count() == 0:
ret["comment"] = "No updates found"
return ret
# List of updates to download
download = salt.utils.win_update.Updates()
for item in install_list.updates:
if not salt.utils.data.is_true(item.IsDownloaded):
download.updates.Add(item)
# List of updates to install
install = salt.utils.win_update.Updates()
installed_updates = []
for item in install_list.updates:
if not salt.utils.data.is_true(item.IsInstalled):
install.updates.Add(item)
else:
installed_updates.extend("KB" + kb for kb in item.KBArticleIDs)
if install.count() == 0:
ret["comment"] = "Updates already installed: "
ret["comment"] += "\n - ".join(installed_updates)
return ret
# Return comment of changes if test.
if __opts__["test"]:
ret["result"] = None
ret["comment"] = "Updates will be installed:"
for update in install.updates:
ret["comment"] += "\n"
ret["comment"] += ": ".join([update.Identity.UpdateID, update.Title])
return ret
# Download updates
wua.download(download)
# Install updates
wua.install(install)
# Refresh windows update info
wua.refresh()
post_info = wua.updates().list()
# Verify the installation
for item in install.list():
if not salt.utils.data.is_true(post_info[item]["Installed"]):
ret["changes"]["failed"] = {
item: {
"Title": post_info[item]["Title"],
"KBs": post_info[item]["KBs"],
}
}
ret["result"] = False
else:
ret["changes"]["installed"] = {
item: {
"Title": post_info[item]["Title"],
"NeedsReboot": post_info[item]["NeedsReboot"],
"KBs": post_info[item]["KBs"],
}
}
if ret["changes"].get("failed", False):
ret["comment"] = "Updates failed"
else:
ret["comment"] = "Updates installed successfully"
return ret
```
###Assistant :
Ensure Microsoft Updates are installed. Updates will be downloaded if
needed.
Args:
name (str):
The identifier of a single update to install.
updates (list):
A list of identifiers for updates to be installed. Overrides
``name``. Default is None.
.. note:: Identifiers can be the GUID, the KB number, or any part of the
Title of the Microsoft update. GUIDs and KBs are the preferred method
to ensure you're installing the correct update.
.. warning:: Using a partial KB number or a partial Title could result in
more than one update being installed.
Returns:
dict: A dictionary containing the results of the update
CLI Example:
.. code-block:: yaml
# using a GUID
install_update:
wua.installed:
- name: 28cf1b09-2b1a-458c-9bd1-971d1b26b211
# using a KB
install_update:
wua.installed:
- name: KB3194343
# using the full Title
install_update:
wua.installed:
- name: Security Update for Adobe Flash Player for Windows 10 Version 1607 (for x64-based Systems) (KB3194343)
# Install multiple updates
install_updates:
wua.installed:
- updates:
- KB3194343
- 28cf1b09-2b1a-458c-9bd1-971d1b26b211
|
938 | def _getKivyInformation(self):
setup_codes = r
info = self.queryRuntimeInformationMultiple(
info_name="kivy_info",
setup_codes=setup_codes,
values=(
("libs_loaded", "kivy.core.image.libs_loaded"),
("window_impl", "kivy.core.window.window_impl"),
("label_libs", "kivy.core.text.label_libs"),
("sdl2_dep_bins", "sdl2_dep_bins"),
("glew_dep_bins", "glew_dep_bins"),
),
)
if info is None:
self.sysexit("Error, it seems Kivy is not installed.")
return info
|
import kivy.core.image
import kivy.core.text
# Prevent Window from being created at compile time.
kivy.core.core_select_lib=(lambda *args, **kwargs: None)
import kivy.core.window
# Kivy has packages designed to provide these on Windows
try:
from kivy_deps.sdl2 import dep_bins as sdl2_dep_bins
except ImportError:
sdl2_dep_bins = []
try:
from kivy_deps.glew import dep_bins as glew_dep_bins
except ImportError:
glew_dep_bins = []
| 53 | 36 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _getKivyInformation(self):
setup_codes = r
info = self.queryRuntimeInformationMultiple(
info_name="kivy_info",
setup_codes=setup_codes,
values=(
("libs_loaded", "kivy.core.image.libs_loaded"),
("window_impl", "kivy.core.window.window_impl"),
("label_libs", "kivy.core.text.label_libs"),
("sdl2_dep_bins", "sdl2_dep_bins"),
("glew_dep_bins", "glew_dep_bins"),
),
)
if info is None:
self.sysexit("Error, it seems Kivy is not installed.")
return info
```
###Assistant :
import kivy.core.image
import kivy.core.text
# Prevent Window from being created at compile time.
kivy.core.core_select_lib=(lambda *args, **kwargs: None)
import kivy.core.window
# Kivy has packages designed to provide these on Windows
try:
from kivy_deps.sdl2 import dep_bins as sdl2_dep_bins
except ImportError:
sdl2_dep_bins = []
try:
from kivy_deps.glew import dep_bins as glew_dep_bins
except ImportError:
glew_dep_bins = []
|
939 | def __iter__(self) -> Iterator:
return iter(self._info_axis)
# can we get a better explanation of this? |
Iterate over info axis.
Returns
-------
iterator
Info axis as iterator.
| 11 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def __iter__(self) -> Iterator:
return iter(self._info_axis)
# can we get a better explanation of this?
```
###Assistant :
Iterate over info axis.
Returns
-------
iterator
Info axis as iterator.
|
940 | def cast(self, target_schema, *args, **kwargs):
table = table_cast(self.table, target_schema, *args, **kwargs)
blocks = []
for subtables in self.blocks:
new_tables = []
fields = list(target_schema)
for subtable in subtables:
subfields = []
for name in subtable.column_names:
subfields.append(fields.pop(next(i for i, field in enumerate(fields) if field.name == name)))
subschema = pa.schema(subfields)
new_tables.append(subtable.cast(subschema, *args, **kwargs))
blocks.append(new_tables)
return ConcatenationTable(table, blocks)
|
Cast table values to another schema
Args:
target_schema (:obj:`Schema`):
Schema to cast to, the names and order of fields must match
safe (:obj:`bool`, defaults to :obj:`True`):
Check for overflows or other unsafe conversions
Returns:
:class:`datasets.table.Table`:
| 35 | 55 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def cast(self, target_schema, *args, **kwargs):
table = table_cast(self.table, target_schema, *args, **kwargs)
blocks = []
for subtables in self.blocks:
new_tables = []
fields = list(target_schema)
for subtable in subtables:
subfields = []
for name in subtable.column_names:
subfields.append(fields.pop(next(i for i, field in enumerate(fields) if field.name == name)))
subschema = pa.schema(subfields)
new_tables.append(subtable.cast(subschema, *args, **kwargs))
blocks.append(new_tables)
return ConcatenationTable(table, blocks)
```
###Assistant :
Cast table values to another schema
Args:
target_schema (:obj:`Schema`):
Schema to cast to, the names and order of fields must match
safe (:obj:`bool`, defaults to :obj:`True`):
Check for overflows or other unsafe conversions
Returns:
:class:`datasets.table.Table`:
|
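The per-subtable casting above ultimately relies on ``pyarrow.Table.cast``; a minimal pyarrow-only example (assumes pyarrow is installed; field names and order must match the table):

```Python
import pyarrow as pa

table = pa.table({
    "id": pa.array([1, 2, 3], type=pa.int64()),
    "score": pa.array([0.5, 0.75, 1.0], type=pa.float64()),
})

target = pa.schema([pa.field("id", pa.int32()), pa.field("score", pa.float32())])
print(table.cast(target).schema)  # id: int32, score: float
```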
941 | def softmax(p, axis=None, temperature=1):
if axis is None:
axis = p.ndim - 1
if temperature == 0.:
# NOTE: in case of multiple equal maxima, returns uniform distribution.
p = p == np.max(p, axis=axis, keepdims=True)
else:
# oldp = p
logp = np.log(p)
logp /= temperature
logp -= logp.max(axis=axis, keepdims=True)
p = np.exp(logp)
p /= p.sum(axis=axis, keepdims=True)
if np.isnan(p).any():
pdb.set_trace() # pylint: disable=forgotten-debug-statement
return p
| Apply the softmax transform to an array of categorical distributions.
Args:
p: an array of categorical probability vectors, possibly unnormalized.
axis: the axis that spans the categories (default: -1).
temperature: if not 1, transform the distribution by dividing the log
probabilities and renormalizing. Values greater than 1 increase entropy,
values less than 1 decrease entropy. A value of 0 yields a deterministic
distribution that chooses the mode.
Returns:
An array of categorical probability vectors, like `p` but tempered and
normalized.
| 80 | 65 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def softmax(p, axis=None, temperature=1):
if axis is None:
axis = p.ndim - 1
if temperature == 0.:
# NOTE: in case of multiple equal maxima, returns uniform distribution.
p = p == np.max(p, axis=axis, keepdims=True)
else:
# oldp = p
logp = np.log(p)
logp /= temperature
logp -= logp.max(axis=axis, keepdims=True)
p = np.exp(logp)
p /= p.sum(axis=axis, keepdims=True)
if np.isnan(p).any():
pdb.set_trace() # pylint: disable=forgotten-debug-statement
return p
```
###Assistant : Apply the softmax transform to an array of categorical distributions.
Args:
p: an array of categorical probability vectors, possibly unnormalized.
axis: the axis that spans the categories (default: -1).
temperature: if not 1, transform the distribution by dividing the log
probabilities and renormalizing. Values greater than 1 increase entropy,
values less than 1 decrease entropy. A value of 0 yields a deterministic
distribution that chooses the mode.
Returns:
An array of categorical probability vectors, like `p` but tempered and
normalized.
|
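A small NumPy demonstration of the temperature behaviour described above (``temper`` is an illustrative re-statement of the same transform, not the original function):

```Python
import numpy as np

def temper(p, temperature):
    if temperature == 0.:
        return (p == p.max()).astype(float)  # mask at the mode, as in the code above
    logp = np.log(p) / temperature
    logp -= logp.max()
    q = np.exp(logp)
    return q / q.sum()

p = np.array([0.1, 0.2, 0.7])
print(temper(p, 1.0))  # unchanged: [0.1 0.2 0.7]
print(temper(p, 0.5))  # sharper (lower entropy), proportional to p**2
print(temper(p, 2.0))  # flatter (higher entropy), proportional to sqrt(p)
print(temper(p, 0.0))  # deterministic: [0. 0. 1.]
```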
942 | def _populate_static_information(self) -> None:
self.info["ludwig_version"] = LUDWIG_VERSION
self.info["start_disk_usage"] = shutil.disk_usage(os.path.expanduser("~")).used
# CPU information
cpu_info = get_my_cpu_info()
self.info["cpu_architecture"] = cpu_info["arch"]
self.info["num_cpu"] = psutil.cpu_count()
self.info["cpu_name"] = cpu_info["brand_raw"]
self.info["total_cpu_memory_size"] = psutil.virtual_memory().total
# GPU information
if self.cuda_is_available:
gpu_infos = get_gpu_info()
for i, gpu_info in enumerate(gpu_infos):
gpu_key = f"cuda_{i}"
self.info[f"{gpu_key}_memory_used"] = []
self.info[f"{gpu_key}_name"] = gpu_info["name"]
self.info[f"{gpu_key}_total_memory"] = gpu_info["total_memory"]
self.info[f"{gpu_key}_driver_version"] = gpu_info["driver_version"]
self.info[f"{gpu_key}_cuda_version"] = gpu_info["cuda_version"]
# recording in microseconds to be in line with torch profiler time recording.
self.info["start_time"] = time.perf_counter_ns() / 1000
| Populate the report with static software and hardware information. | 9 | 77 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _populate_static_information(self) -> None:
self.info["ludwig_version"] = LUDWIG_VERSION
self.info["start_disk_usage"] = shutil.disk_usage(os.path.expanduser("~")).used
# CPU information
cpu_info = get_my_cpu_info()
self.info["cpu_architecture"] = cpu_info["arch"]
self.info["num_cpu"] = psutil.cpu_count()
self.info["cpu_name"] = cpu_info["brand_raw"]
self.info["total_cpu_memory_size"] = psutil.virtual_memory().total
# GPU information
if self.cuda_is_available:
gpu_infos = get_gpu_info()
for i, gpu_info in enumerate(gpu_infos):
gpu_key = f"cuda_{i}"
self.info[f"{gpu_key}_memory_used"] = []
self.info[f"{gpu_key}_name"] = gpu_info["name"]
self.info[f"{gpu_key}_total_memory"] = gpu_info["total_memory"]
self.info[f"{gpu_key}_driver_version"] = gpu_info["driver_version"]
self.info[f"{gpu_key}_cuda_version"] = gpu_info["cuda_version"]
# recording in microseconds to be in line with torch profiler time recording.
self.info["start_time"] = time.perf_counter_ns() / 1000
```
###Assistant : Populate the report with static software and hardware information.
|
943 | def assertCanNotCreateAt(self, parent_model, child_model, msg=None):
if self._testCanCreateAt(parent_model, child_model):
msg = self._formatMessage(
msg,
"Can create a %s.%s under a %s.%s"
% (
child_model._meta.app_label,
child_model._meta.model_name,
parent_model._meta.app_label,
parent_model._meta.model_name,
),
)
raise self.failureException(msg)
|
Assert a particular child Page type can not be created under a parent
Page type. ``parent_model`` and ``child_model`` should be the Page
classes being tested.
| 25 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def assertCanNotCreateAt(self, parent_model, child_model, msg=None):
if self._testCanCreateAt(parent_model, child_model):
msg = self._formatMessage(
msg,
"Can create a %s.%s under a %s.%s"
% (
child_model._meta.app_label,
child_model._meta.model_name,
parent_model._meta.app_label,
parent_model._meta.model_name,
),
)
raise self.failureException(msg)
```
###Assistant :
Assert a particular child Page type can not be created under a parent
Page type. ``parent_model`` and ``child_model`` should be the Page
classes being tested.
|
944 | def get_feature_objects(self) -> Mapping[Project, Feature]:
cls = self._manager._get_feature_class(self.feature_name)
return {obj: cls(self.feature_name, obj) for obj in self.objects}
|
Iterate over individual Feature objects.
This is a fallback mode for applying a FeatureHandler that doesn't
support checking the entire batch at once.
| 23 | 16 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_feature_objects(self) -> Mapping[Project, Feature]:
cls = self._manager._get_feature_class(self.feature_name)
return {obj: cls(self.feature_name, obj) for obj in self.objects}
```
###Assistant :
Iterate over individual Feature objects.
This is a fallback mode for applying a FeatureHandler that doesn't
support checking the entire batch at once.
|
945 | def require_ffmpeg(test_case):
import subprocess
try:
subprocess.check_output(["ffmpeg", "-h"], stderr=subprocess.DEVNULL)
return test_case
except Exception:
return unittest.skip("test requires ffmpeg")(test_case)
|
Decorator marking a test that requires FFmpeg.
These tests are skipped when FFmpeg isn't installed.
| 15 | 16 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def require_ffmpeg(test_case):
import subprocess
try:
subprocess.check_output(["ffmpeg", "-h"], stderr=subprocess.DEVNULL)
return test_case
except Exception:
return unittest.skip("test requires ffmpeg")(test_case)
```
###Assistant :
Decorator marking a test that requires FFmpeg.
These tests are skipped when FFmpeg isn't installed.
|
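The same skip-if-binary-missing pattern generalised to any executable; ``require_binary`` is a hypothetical helper, not part of the library above (only the standard library is needed):

```Python
import subprocess
import unittest


def require_binary(binary):
    # Skip the decorated test when ``binary`` cannot be executed.
    def decorator(test_case):
        try:
            subprocess.check_output([binary, "-h"], stderr=subprocess.DEVNULL)
            return test_case
        except Exception:
            return unittest.skip(f"test requires {binary}")(test_case)
    return decorator


class ExampleTests(unittest.TestCase):
    @require_binary("ffmpeg")
    def test_something_with_ffmpeg(self):
        self.assertTrue(True)
```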
946 | def generate_random_string():
import random
import string
return "".join(random.choices(string.ascii_uppercase + string.digits, k=8))
random_string = generate_random_string()
# [START create_queue]
create_queue = CloudTasksQueueCreateOperator(
location=LOCATION,
task_queue=Queue(stackdriver_logging_config=dict(sampling_ratio=0.5)),
queue_name=QUEUE_ID + "{{ task_instance.xcom_pull(task_ids='random_string') }}",
retry=Retry(maximum=10.0),
timeout=5,
task_id="create_queue",
)
# [END create_queue]
# [START delete_queue]
delete_queue = CloudTasksQueueDeleteOperator(
location=LOCATION,
queue_name=QUEUE_ID + "{{ task_instance.xcom_pull(task_ids='random_string') }}",
task_id="delete_queue",
)
# [END delete_queue]
delete_queue.trigger_rule = TriggerRule.ALL_DONE
# [START resume_queue]
resume_queue = CloudTasksQueueResumeOperator(
location=LOCATION,
queue_name=QUEUE_ID + "{{ task_instance.xcom_pull(task_ids='random_string') }}",
task_id="resume_queue",
)
# [END resume_queue]
# [START pause_queue]
pause_queue = CloudTasksQueuePauseOperator(
location=LOCATION,
queue_name=QUEUE_ID + "{{ task_instance.xcom_pull(task_ids='random_string') }}",
task_id="pause_queue",
)
# [END pause_queue]
# [START purge_queue]
purge_queue = CloudTasksQueuePurgeOperator(
location=LOCATION,
queue_name=QUEUE_ID + "{{ task_instance.xcom_pull(task_ids='random_string') }}",
task_id="purge_queue",
)
# [END purge_queue]
# [START get_queue]
get_queue = CloudTasksQueueGetOperator(
location=LOCATION,
queue_name=QUEUE_ID + "{{ task_instance.xcom_pull(task_ids='random_string') }}",
task_id="get_queue",
)
get_queue_result = BashOperator(
task_id="get_queue_result",
bash_command=f"echo {get_queue.output}",
)
# [END get_queue]
# [START update_queue]
update_queue = CloudTasksQueueUpdateOperator(
task_queue=Queue(stackdriver_logging_config=dict(sampling_ratio=1)),
location=LOCATION,
queue_name=QUEUE_ID + "{{ task_instance.xcom_pull(task_ids='random_string') }}",
update_mask=FieldMask(paths=["stackdriver_logging_config.sampling_ratio"]),
task_id="update_queue",
)
# [END update_queue]
# [START list_queue]
list_queue = CloudTasksQueuesListOperator(location=LOCATION, task_id="list_queue")
# [END list_queue]
chain(
random_string,
create_queue,
update_queue,
pause_queue,
resume_queue,
purge_queue,
get_queue,
get_queue_result,
list_queue,
delete_queue,
)
from tests.system.utils.watcher import watcher
# This test needs watcher in order to properly mark success/failure
# when "tearDown" task with trigger rule is part of the DAG
list(dag.tasks) >> watcher()
from tests.system.utils import get_test_run # noqa: E402
# Needed to run the example DAG with pytest (see: tests/system/README.md#run_via_pytest)
test_run = get_test_run(dag)
|
Generate random string for queue and task names.
Queue name cannot be repeated within the preceding 7 days and
task name within the last 1 hour.
| 25 | 221 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def generate_random_string():
import random
import string
return "".join(random.choices(string.ascii_uppercase + string.digits, k=8))
random_string = generate_random_string()
# [START create_queue]
create_queue = CloudTasksQueueCreateOperator(
location=LOCATION,
task_queue=Queue(stackdriver_logging_config=dict(sampling_ratio=0.5)),
queue_name=QUEUE_ID + "{{ task_instance.xcom_pull(task_ids='random_string') }}",
retry=Retry(maximum=10.0),
timeout=5,
task_id="create_queue",
)
# [END create_queue]
# [START delete_queue]
delete_queue = CloudTasksQueueDeleteOperator(
location=LOCATION,
queue_name=QUEUE_ID + "{{ task_instance.xcom_pull(task_ids='random_string') }}",
task_id="delete_queue",
)
# [END delete_queue]
delete_queue.trigger_rule = TriggerRule.ALL_DONE
# [START resume_queue]
resume_queue = CloudTasksQueueResumeOperator(
location=LOCATION,
queue_name=QUEUE_ID + "{{ task_instance.xcom_pull(task_ids='random_string') }}",
task_id="resume_queue",
)
# [END resume_queue]
# [START pause_queue]
pause_queue = CloudTasksQueuePauseOperator(
location=LOCATION,
queue_name=QUEUE_ID + "{{ task_instance.xcom_pull(task_ids='random_string') }}",
task_id="pause_queue",
)
# [END pause_queue]
# [START purge_queue]
purge_queue = CloudTasksQueuePurgeOperator(
location=LOCATION,
queue_name=QUEUE_ID + "{{ task_instance.xcom_pull(task_ids='random_string') }}",
task_id="purge_queue",
)
# [END purge_queue]
# [START get_queue]
get_queue = CloudTasksQueueGetOperator(
location=LOCATION,
queue_name=QUEUE_ID + "{{ task_instance.xcom_pull(task_ids='random_string') }}",
task_id="get_queue",
)
get_queue_result = BashOperator(
task_id="get_queue_result",
bash_command=f"echo {get_queue.output}",
)
# [END get_queue]
# [START update_queue]
update_queue = CloudTasksQueueUpdateOperator(
task_queue=Queue(stackdriver_logging_config=dict(sampling_ratio=1)),
location=LOCATION,
queue_name=QUEUE_ID + "{{ task_instance.xcom_pull(task_ids='random_string') }}",
update_mask=FieldMask(paths=["stackdriver_logging_config.sampling_ratio"]),
task_id="update_queue",
)
# [END update_queue]
# [START list_queue]
list_queue = CloudTasksQueuesListOperator(location=LOCATION, task_id="list_queue")
# [END list_queue]
chain(
random_string,
create_queue,
update_queue,
pause_queue,
resume_queue,
purge_queue,
get_queue,
get_queue_result,
list_queue,
delete_queue,
)
from tests.system.utils.watcher import watcher
# This test needs watcher in order to properly mark success/failure
# when "tearDown" task with trigger rule is part of the DAG
list(dag.tasks) >> watcher()
from tests.system.utils import get_test_run # noqa: E402
# Needed to run the example DAG with pytest (see: tests/system/README.md#run_via_pytest)
test_run = get_test_run(dag)
```
###Assistant :
Generate random string for queue and task names.
Queue name cannot be repeated within the preceding 7 days and
task name within the last 1 hour.
|
947 | def make_gradient_clipvalue_fn(clipvalue):
if clipvalue is None:
return lambda grads_and_vars: grads_and_vars
| Creates a gradient transformation function for clipping by value. | 9 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def make_gradient_clipvalue_fn(clipvalue):
if clipvalue is None:
return lambda grads_and_vars: grads_and_vars
```
###Assistant : Creates a gradient transformation function for clipping by value.
|
948 | def odd_ext(x, n, axis=-1):
if n < 1:
return x
if n > x.shape[axis] - 1:
raise ValueError(
f"The extension length n ({n}) is too big. "
f"It must not exceed x.shape[axis]-1, which is {x.shape[axis] - 1}.")
left_end = lax.slice_in_dim(x, 0, 1, axis=axis)
left_ext = jnp.flip(lax.slice_in_dim(x, 1, n + 1, axis=axis), axis=axis)
right_end = lax.slice_in_dim(x, -1, None, axis=axis)
right_ext = jnp.flip(lax.slice_in_dim(x, -(n + 1), -1, axis=axis), axis=axis)
ext = jnp.concatenate((2 * left_end - left_ext,
x,
2 * right_end - right_ext),
axis=axis)
return ext
| Extends `x` along `axis` by odd-extension.
This function was previously a part of "scipy.signal.signaltools" but is no
longer exposed.
Args:
x : input array
n : the number of points to be added to the both end
axis: the axis to be extended
| 44 | 83 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def odd_ext(x, n, axis=-1):
if n < 1:
return x
if n > x.shape[axis] - 1:
raise ValueError(
f"The extension length n ({n}) is too big. "
f"It must not exceed x.shape[axis]-1, which is {x.shape[axis] - 1}.")
left_end = lax.slice_in_dim(x, 0, 1, axis=axis)
left_ext = jnp.flip(lax.slice_in_dim(x, 1, n + 1, axis=axis), axis=axis)
right_end = lax.slice_in_dim(x, -1, None, axis=axis)
right_ext = jnp.flip(lax.slice_in_dim(x, -(n + 1), -1, axis=axis), axis=axis)
ext = jnp.concatenate((2 * left_end - left_ext,
x,
2 * right_end - right_ext),
axis=axis)
return ext
```
###Assistant : Extends `x` along `axis` by odd-extension.
This function was previously a part of "scipy.signal.signaltools" but is no
longer exposed.
Args:
x : input array
n : the number of points to be added to the both end
axis: the axis to be extended
|
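The same odd-extension idea sketched with plain NumPy instead of ``jax.numpy`` (``odd_ext_1d`` is an illustrative 1-D re-statement, not the JAX function), with a small worked example:

```Python
import numpy as np

def odd_ext_1d(x, n):
    # Reflect n points about each endpoint: extension = 2*end - flipped neighbours.
    left = 2 * x[0] - x[1:n + 1][::-1]
    right = 2 * x[-1] - x[-(n + 1):-1][::-1]
    return np.concatenate([left, x, right])

x = np.array([1.0, 2.0, 4.0, 7.0])
print(odd_ext_1d(x, 2))  # [-2.  0.  1.  2.  4.  7. 10. 12.]
```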
949 | def filter_on_submodules(all_modules, submodule):
filtered_modules = [
mod for mod in all_modules if PACKAGE + submodule in mod.__name__
]
return filtered_modules
| Filters all the modules based on the module flag.
The module flag has to be relative to the core package imported.
For example, if `submodule=keras.layers` then, this function will return
all the modules in the submodule.
Args:
all_modules: All the modules in the core package.
submodule: Submodule to filter from all the modules.
Returns:
All the modules in the submodule.
| 60 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def filter_on_submodules(all_modules, submodule):
filtered_modules = [
mod for mod in all_modules if PACKAGE + submodule in mod.__name__
]
return filtered_modules
```
###Assistant : Filters all the modules based on the module flag.
The module flag has to be relative to the core package imported.
For example, if `submodule=keras.layers` then, this function will return
all the modules in the submodule.
Args:
all_modules: All the modules in the core package.
submodule: Submodule to filter from all the modules.
Returns:
All the modules in the submodule.
|
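The filtering step shown with real importable modules so it runs anywhere; ``json`` stands in for the core package and ``decoder`` for the submodule flag (names are illustrative only):

```Python
import importlib

PACKAGE = "json."  # assumed core-package prefix for this sketch
all_modules = [importlib.import_module(n) for n in ("json.decoder", "json.encoder", "math")]

filtered = [mod for mod in all_modules if PACKAGE + "decoder" in mod.__name__]
print([mod.__name__ for mod in filtered])  # ['json.decoder']
```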
950 | def get_pe_matching_query(amount_condition, account_from_to, transaction):
# get matching payment entries query
from_date = frappe.db.get_single_value("Bank Reconciliation Tool", "bank_statement_from_date")
to_date = frappe.db.get_single_value("Bank Reconciliation Tool", "bank_statement_to_date")
from_reference_date = frappe.db.get_single_value(
"Bank Reconciliation Tool", "from_reference_date"
)
to_reference_date = frappe.db.get_single_value("Bank Reconciliation Tool", "to_reference_date")
filtered_by_reference_date = frappe.db.get_single_value(
"Bank Reconciliation Tool", "filtered_by_reference_date"
)
if transaction.deposit > 0:
currency_field = "paid_to_account_currency as currency"
else:
currency_field = "paid_from_account_currency as currency"
cond_filtered_from_ref_date = ""
cond_filtered_to_ref_date = ""
cond_filtered_from_posting_date = ""
cond_filtered_to_posting_date = ""
from_ref_date =""
to_ref_date =""
from_post_date = ""
to_post_date = ""
if(filtered_by_reference_date):
cond_filtered_from_ref_date = " AND reference_date >="
cond_filtered_to_ref_date = " AND reference_date <="
from_ref_date = from_reference_date
to_ref_date = to_reference_date
elif(not filtered_by_reference_date):
cond_filtered_from_posting_date = " AND posting_date >="
cond_filtered_to_posting_date = " AND posting_date <="
from_post_date = from_date
to_post_date = to_date
pe_data= f
return pe_data
|
SELECT
(CASE WHEN reference_no=%(reference_no)s THEN 1 ELSE 0 END
+ CASE WHEN (party_type = %(party_type)s AND party = %(party)s ) THEN 1 ELSE 0 END
+ 1 ) AS rank,
'Payment Entry' as doctype,
name,
paid_amount,
reference_no,
reference_date,
party,
party_type,
posting_date,
{currency_field}
FROM
`tabPayment Entry`
WHERE
paid_amount {amount_condition} %(amount)s
AND docstatus = 1
AND payment_type IN (%(payment_type)s, 'Internal Transfer')
AND ifnull(clearance_date, '') = ""
AND {account_from_to} = %(bank_account)s
AND reference_no = '{transaction.reference_number}'
{cond_filtered_from_ref_date} "{from_ref_date}"
{cond_filtered_to_ref_date} "{to_ref_date}"
{cond_filtered_from_posting_date} "{from_post_date}"
{cond_filtered_to_posting_date} "{to_post_date}"
| 80 | 124 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_pe_matching_query(amount_condition, account_from_to, transaction):
# get matching payment entries query
from_date = frappe.db.get_single_value("Bank Reconciliation Tool", "bank_statement_from_date")
to_date = frappe.db.get_single_value("Bank Reconciliation Tool", "bank_statement_to_date")
from_reference_date = frappe.db.get_single_value(
"Bank Reconciliation Tool", "from_reference_date"
)
to_reference_date = frappe.db.get_single_value("Bank Reconciliation Tool", "to_reference_date")
filtered_by_reference_date = frappe.db.get_single_value(
"Bank Reconciliation Tool", "filtered_by_reference_date"
)
if transaction.deposit > 0:
currency_field = "paid_to_account_currency as currency"
else:
currency_field = "paid_from_account_currency as currency"
cond_filtered_from_ref_date = ""
cond_filtered_to_ref_date = ""
cond_filtered_from_posting_date = ""
cond_filtered_to_posting_date = ""
from_ref_date =""
to_ref_date =""
from_post_date = ""
to_post_date = ""
if(filtered_by_reference_date):
cond_filtered_from_ref_date = " AND reference_date >="
cond_filtered_to_ref_date = " AND reference_date <="
from_ref_date = from_reference_date
to_ref_date = to_reference_date
elif(not filtered_by_reference_date):
cond_filtered_from_posting_date = " AND posting_date >="
cond_filtered_to_posting_date = " AND posting_date <="
from_post_date = from_date
to_post_date = to_date
pe_data= f
return pe_data
```
###Assistant :
SELECT
(CASE WHEN reference_no=%(reference_no)s THEN 1 ELSE 0 END
+ CASE WHEN (party_type = %(party_type)s AND party = %(party)s ) THEN 1 ELSE 0 END
+ 1 ) AS rank,
'Payment Entry' as doctype,
name,
paid_amount,
reference_no,
reference_date,
party,
party_type,
posting_date,
{currency_field}
FROM
`tabPayment Entry`
WHERE
paid_amount {amount_condition} %(amount)s
AND docstatus = 1
AND payment_type IN (%(payment_type)s, 'Internal Transfer')
AND ifnull(clearance_date, '') = ""
AND {account_from_to} = %(bank_account)s
AND reference_no = '{transaction.reference_number}'
{cond_filtered_from_ref_date} "{from_ref_date}"
{cond_filtered_to_ref_date} "{to_ref_date}"
{cond_filtered_from_posting_date} "{from_post_date}"
{cond_filtered_to_posting_date} "{to_post_date}"
|
951 | def closed(self) -> IntervalInclusiveType:
warnings.warn(
"Attribute `closed` is deprecated in favor of `inclusive`.",
FutureWarning,
stacklevel=find_stack_level(inspect.currentframe()),
)
return self.dtype.inclusive
_interval_shared_docs["set_closed"] = textwrap.dedent(
)
|
String describing the inclusive side the intervals.
Either ``left``, ``right``, ``both`` or ``neither``.
Return an identical %(klass)s closed on the specified side.
.. deprecated:: 1.5.0
Parameters
----------
closed : {'left', 'right', 'both', 'neither'}
Whether the intervals are closed on the left-side, right-side, both
or neither.
Returns
-------
new_index : %(klass)s
%(examples)s\
| 51 | 22 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def closed(self) -> IntervalInclusiveType:
warnings.warn(
"Attribute `closed` is deprecated in favor of `inclusive`.",
FutureWarning,
stacklevel=find_stack_level(inspect.currentframe()),
)
return self.dtype.inclusive
_interval_shared_docs["set_closed"] = textwrap.dedent(
)
```
###Assistant :
String describing the inclusive side the intervals.
Either ``left``, ``right``, ``both`` or ``neither``.
Return an identical %(klass)s closed on the specified side.
.. deprecated:: 1.5.0
Parameters
----------
closed : {'left', 'right', 'both', 'neither'}
Whether the intervals are closed on the left-side, right-side, both
or neither.
Returns
-------
new_index : %(klass)s
%(examples)s\
|
952 | def as_real_imag(self, deep=True, **hints):
from sympy.functions.elementary.trigonometric import cos, sin
re, im = self.args[0].as_real_imag()
if deep:
re = re.expand(deep, **hints)
im = im.expand(deep, **hints)
cos, sin = cos(im), sin(im)
return (exp(re)*cos, exp(re)*sin)
|
Returns this function as a 2-tuple representing a complex number.
Examples
========
>>> from sympy import I, exp
>>> from sympy.abc import x
>>> exp(x).as_real_imag()
(exp(re(x))*cos(im(x)), exp(re(x))*sin(im(x)))
>>> exp(1).as_real_imag()
(E, 0)
>>> exp(I).as_real_imag()
(cos(1), sin(1))
>>> exp(1+I).as_real_imag()
(E*cos(1), E*sin(1))
See Also
========
sympy.functions.elementary.complexes.re
sympy.functions.elementary.complexes.im
| 44 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def as_real_imag(self, deep=True, **hints):
from sympy.functions.elementary.trigonometric import cos, sin
re, im = self.args[0].as_real_imag()
if deep:
re = re.expand(deep, **hints)
im = im.expand(deep, **hints)
cos, sin = cos(im), sin(im)
return (exp(re)*cos, exp(re)*sin)
```
###Assistant :
Returns this function as a 2-tuple representing a complex number.
Examples
========
>>> from sympy import I, exp
>>> from sympy.abc import x
>>> exp(x).as_real_imag()
(exp(re(x))*cos(im(x)), exp(re(x))*sin(im(x)))
>>> exp(1).as_real_imag()
(E, 0)
>>> exp(I).as_real_imag()
(cos(1), sin(1))
>>> exp(1+I).as_real_imag()
(E*cos(1), E*sin(1))
See Also
========
sympy.functions.elementary.complexes.re
sympy.functions.elementary.complexes.im
|
953 | async def _get_conversation_ids_to_process(self) -> Set[Text]:
conversation_ids_in_tracker_store = (
await self._get_conversation_ids_in_tracker()
)
if not self.requested_conversation_ids:
return conversation_ids_in_tracker_store
self._validate_all_requested_ids_exist(conversation_ids_in_tracker_store)
conversation_ids_to_process = conversation_ids_in_tracker_store & set(
self.requested_conversation_ids
)
if not conversation_ids_to_process:
raise NoEventsToMigrateError(
"Could not find an overlap between the requested "
"conversation IDs and those found in the tracker store. Exiting."
)
return conversation_ids_to_process
| Get conversation IDs that are good for processing.
Finds the intersection of events that are contained in the tracker store with
those events requested as a command-line argument.
Returns:
Conversation IDs that are both requested and contained in the tracker
store. If no conversation IDs are requested, all conversation IDs in the
tracker store are returned.
| 56 | 51 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def _get_conversation_ids_to_process(self) -> Set[Text]:
conversation_ids_in_tracker_store = (
await self._get_conversation_ids_in_tracker()
)
if not self.requested_conversation_ids:
return conversation_ids_in_tracker_store
self._validate_all_requested_ids_exist(conversation_ids_in_tracker_store)
conversation_ids_to_process = conversation_ids_in_tracker_store & set(
self.requested_conversation_ids
)
if not conversation_ids_to_process:
raise NoEventsToMigrateError(
"Could not find an overlap between the requested "
"conversation IDs and those found in the tracker store. Exiting."
)
return conversation_ids_to_process
```
###Assistant : Get conversation IDs that are good for processing.
Finds the intersection of events that are contained in the tracker store with
those events requested as a command-line argument.
Returns:
Conversation IDs that are both requested and contained in the tracker
store. If no conversation IDs are requested, all conversation IDs in the
tracker store are returned.
|
954 | def call(self, inputs, training=None, mask=None):
raise NotImplementedError(
"Unimplemented `tf.keras.Model.call()`: if you "
"intend to create a `Model` with the Functional "
"API, please provide `inputs` and `outputs` "
"arguments. Otherwise, subclass `Model` with an "
"overridden `call()` method."
)
| Calls the model on new inputs and returns the outputs as tensors.
In this case `call()` just reapplies
all ops in the graph to the new inputs
(e.g. build a new computational graph from the provided inputs).
Note: This method should not be called directly. It is only meant to be
overridden when subclassing `tf.keras.Model`.
To call a model on an input, always use the `__call__()` method,
i.e. `model(inputs)`, which relies on the underlying `call()` method.
Args:
inputs: Input tensor, or dict/list/tuple of input tensors.
training: Boolean or boolean scalar tensor, indicating whether to run
the `Network` in training mode or inference mode.
mask: A mask or list of masks. A mask can be either a boolean tensor or
None (no mask). For more details, check the guide
[here](https://www.tensorflow.org/guide/keras/masking_and_padding).
Returns:
A tensor if there is a single output, or
a list of tensors if there are more than one outputs.
| 150 | 39 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def call(self, inputs, training=None, mask=None):
raise NotImplementedError(
"Unimplemented `tf.keras.Model.call()`: if you "
"intend to create a `Model` with the Functional "
"API, please provide `inputs` and `outputs` "
"arguments. Otherwise, subclass `Model` with an "
"overridden `call()` method."
)
```
###Assistant : Calls the model on new inputs and returns the outputs as tensors.
In this case `call()` just reapplies
all ops in the graph to the new inputs
(e.g. build a new computational graph from the provided inputs).
Note: This method should not be called directly. It is only meant to be
overridden when subclassing `tf.keras.Model`.
To call a model on an input, always use the `__call__()` method,
i.e. `model(inputs)`, which relies on the underlying `call()` method.
Args:
inputs: Input tensor, or dict/list/tuple of input tensors.
training: Boolean or boolean scalar tensor, indicating whether to run
the `Network` in training mode or inference mode.
mask: A mask or list of masks. A mask can be either a boolean tensor or
None (no mask). For more details, check the guide
[here](https://www.tensorflow.org/guide/keras/masking_and_padding).
Returns:
A tensor if there is a single output, or
a list of tensors if there are more than one outputs.
|
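A minimal subclassing sketch for the entry above, assuming a standard `tf.keras` install; the layer and model names are illustrative. Note the model is invoked through `__call__()` (i.e. `model(inputs)`), never by calling `call()` directly.
```Python
import tensorflow as tf


class TinyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(4)

    def call(self, inputs, training=None, mask=None):
        # Reapply this model's ops to the new inputs.
        return self.dense(inputs)


model = TinyModel()
outputs = model(tf.ones((2, 3)))  # __call__ dispatches to call()
print(outputs.shape)  # (2, 4)
```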
955 | def _read(cls, path_or_buf, **kwargs):
path_or_buf = cls.get_path_or_buffer(path_or_buf)
if isinstance(path_or_buf, str):
if not cls.file_exists(path_or_buf):
return cls.single_worker_read(path_or_buf, **kwargs)
path_or_buf = cls.get_path(path_or_buf)
elif not cls.pathlib_or_pypath(path_or_buf):
return cls.single_worker_read(path_or_buf, **kwargs)
if not kwargs.get("lines", False):
return cls.single_worker_read(path_or_buf, **kwargs)
with OpenFile(path_or_buf, "rb") as f:
columns = pandas.read_json(BytesIO(b"" + f.readline()), lines=True).columns
kwargs["columns"] = columns
empty_pd_df = pandas.DataFrame(columns=columns)
with OpenFile(path_or_buf, "rb", kwargs.get("compression", "infer")) as f:
partition_ids = []
index_ids = []
dtypes_ids = []
column_widths, num_splits = cls._define_metadata(empty_pd_df, columns)
args = {"fname": path_or_buf, "num_splits": num_splits, **kwargs}
splits = cls.partitioned_file(
f,
num_partitions=NPartitions.get(),
)
for start, end in splits:
args.update({"start": start, "end": end})
partition_id = cls.deploy(cls.parse, num_returns=num_splits + 3, **args)
partition_ids.append(partition_id[:-3])
index_ids.append(partition_id[-3])
dtypes_ids.append(partition_id[-2])
# partition_id[-1] contains the columns for each partition, which will be useful
# for implementing when `lines=False`.
row_lengths = cls.materialize(index_ids)
new_index = pandas.RangeIndex(sum(row_lengths))
dtypes = cls.get_dtypes(dtypes_ids)
partition_ids = cls.build_partition(partition_ids, row_lengths, column_widths)
if isinstance(dtypes, pandas.Series):
dtypes.index = columns
else:
dtypes = pandas.Series(dtypes, index=columns)
new_frame = cls.frame_cls(
np.array(partition_ids),
new_index,
columns,
row_lengths,
column_widths,
dtypes=dtypes,
)
new_frame.synchronize_labels(axis=0)
return cls.query_compiler_cls(new_frame)
|
Read data from `path_or_buf` according to the passed `read_json` `kwargs` parameters.
Parameters
----------
path_or_buf : str, path object or file-like object
`path_or_buf` parameter of `read_json` function.
**kwargs : dict
Parameters of `read_json` function.
Returns
-------
BaseQueryCompiler
Query compiler with imported data for further processing.
| 44 | 157 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _read(cls, path_or_buf, **kwargs):
path_or_buf = cls.get_path_or_buffer(path_or_buf)
if isinstance(path_or_buf, str):
if not cls.file_exists(path_or_buf):
return cls.single_worker_read(path_or_buf, **kwargs)
path_or_buf = cls.get_path(path_or_buf)
elif not cls.pathlib_or_pypath(path_or_buf):
return cls.single_worker_read(path_or_buf, **kwargs)
if not kwargs.get("lines", False):
return cls.single_worker_read(path_or_buf, **kwargs)
with OpenFile(path_or_buf, "rb") as f:
columns = pandas.read_json(BytesIO(b"" + f.readline()), lines=True).columns
kwargs["columns"] = columns
empty_pd_df = pandas.DataFrame(columns=columns)
with OpenFile(path_or_buf, "rb", kwargs.get("compression", "infer")) as f:
partition_ids = []
index_ids = []
dtypes_ids = []
column_widths, num_splits = cls._define_metadata(empty_pd_df, columns)
args = {"fname": path_or_buf, "num_splits": num_splits, **kwargs}
splits = cls.partitioned_file(
f,
num_partitions=NPartitions.get(),
)
for start, end in splits:
args.update({"start": start, "end": end})
partition_id = cls.deploy(cls.parse, num_returns=num_splits + 3, **args)
partition_ids.append(partition_id[:-3])
index_ids.append(partition_id[-3])
dtypes_ids.append(partition_id[-2])
# partition_id[-1] contains the columns for each partition, which will be useful
# for implementing when `lines=False`.
row_lengths = cls.materialize(index_ids)
new_index = pandas.RangeIndex(sum(row_lengths))
dtypes = cls.get_dtypes(dtypes_ids)
partition_ids = cls.build_partition(partition_ids, row_lengths, column_widths)
if isinstance(dtypes, pandas.Series):
dtypes.index = columns
else:
dtypes = pandas.Series(dtypes, index=columns)
new_frame = cls.frame_cls(
np.array(partition_ids),
new_index,
columns,
row_lengths,
column_widths,
dtypes=dtypes,
)
new_frame.synchronize_labels(axis=0)
return cls.query_compiler_cls(new_frame)
```
###Assistant :
Read data from `path_or_buf` according to the passed `read_json` `kwargs` parameters.
Parameters
----------
path_or_buf : str, path object or file-like object
`path_or_buf` parameter of `read_json` function.
**kwargs : dict
Parameters of `read_json` function.
Returns
-------
BaseQueryCompiler
Query compiler with imported data for further processing.
|
956 | def stop_ambient_camera_rotation(self, about="theta"):
about: str = about.lower()
try:
if config.renderer == RendererType.CAIRO:
trackers = {
"theta": self.camera.theta_tracker,
"phi": self.camera.phi_tracker,
"gamma": self.camera.gamma_tracker,
}
x: ValueTracker = trackers[about]
x.clear_updaters()
self.remove(x)
elif config.renderer == RendererType.OPENGL:
self.camera.clear_updaters()
except Exception:
raise ValueError("Invalid ambient rotation angle.")
|
This method stops all ambient camera rotation.
| 7 | 40 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def stop_ambient_camera_rotation(self, about="theta"):
about: str = about.lower()
try:
if config.renderer == RendererType.CAIRO:
trackers = {
"theta": self.camera.theta_tracker,
"phi": self.camera.phi_tracker,
"gamma": self.camera.gamma_tracker,
}
x: ValueTracker = trackers[about]
x.clear_updaters()
self.remove(x)
elif config.renderer == RendererType.OPENGL:
self.camera.clear_updaters()
except Exception:
raise ValueError("Invalid ambient rotation angle.")
```
###Assistant :
This method stops all ambient camera rotation.
|
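A usage sketch for the entry above, assuming the Manim Community `ThreeDScene` API with `begin_ambient_camera_rotation()`:
```Python
from manim import DEGREES, Sphere, ThreeDScene


class RotateThenStop(ThreeDScene):
    def construct(self):
        self.set_camera_orientation(phi=75 * DEGREES)
        self.add(Sphere())
        # Start a slow theta rotation, let it run, then stop it.
        self.begin_ambient_camera_rotation(rate=0.2, about="theta")
        self.wait(2)
        self.stop_ambient_camera_rotation(about="theta")
        self.wait(1)
```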
957 | def _get_bundled_aggregations(self) -> JsonDict:
# Fetch the bundled aggregations of the event.
channel = self.make_request(
"GET",
f"/_matrix/client/unstable/rooms/{self.room}/event/{self.parent_id}",
access_token=self.user_token,
)
self.assertEquals(200, channel.code, channel.json_body)
return channel.json_body["unsigned"].get("m.relations", {})
|
Requests /event on the parent ID and returns the m.relations field (from unsigned), if it exists.
| 16 | 25 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_bundled_aggregations(self) -> JsonDict:
# Fetch the bundled aggregations of the event.
channel = self.make_request(
"GET",
f"/_matrix/client/unstable/rooms/{self.room}/event/{self.parent_id}",
access_token=self.user_token,
)
self.assertEquals(200, channel.code, channel.json_body)
return channel.json_body["unsigned"].get("m.relations", {})
```
###Assistant :
Requests /event on the parent ID and returns the m.relations field (from unsigned), if it exists.
|
958 | def get_memos(self) -> Dict[bytes32, List[bytes]]:
memos: Dict[bytes32, List[bytes]] = {}
for coin_spend in self.coin_spends:
result = Program.from_bytes(bytes(coin_spend.puzzle_reveal)).run(
Program.from_bytes(bytes(coin_spend.solution))
)
for condition in result.as_python():
if condition[0] == ConditionOpcode.CREATE_COIN and len(condition) >= 4:
# If only 3 elements (opcode + 2 args), there is no memo, this is ph, amount
coin_added = Coin(coin_spend.coin.name(), bytes32(condition[1]), int_from_bytes(condition[2]))
if type(condition[3]) != list:
# If it's not a list, it's not the correct format
continue
memos[coin_added.name()] = condition[3]
return memos
# Note that `coin_spends` used to have the bad name `coin_solutions`.
# Some API still expects this name. For now, we accept both names.
#
# TODO: continue this deprecation. Eventually, all code below here should be removed.
# 1. set `exclude_modern_keys` to `False` (and manually set to `True` where necessary)
# 2. set `include_legacy_keys` to `False` (and manually set to `False` where necessary)
# 3. remove all references to `include_legacy_keys=True`
# 4. remove all code below this point
|
Retrieves the memos for additions in this spend_bundle, which are formatted as a list in the 3rd parameter of
CREATE_COIN. If there are no memos, the addition coin_id is not included. If they are not formatted as a list
of bytes, they are not included. This is expensive to call, it should not be used in full node code.
| 59 | 153 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_memos(self) -> Dict[bytes32, List[bytes]]:
memos: Dict[bytes32, List[bytes]] = {}
for coin_spend in self.coin_spends:
result = Program.from_bytes(bytes(coin_spend.puzzle_reveal)).run(
Program.from_bytes(bytes(coin_spend.solution))
)
for condition in result.as_python():
if condition[0] == ConditionOpcode.CREATE_COIN and len(condition) >= 4:
# If only 3 elements (opcode + 2 args), there is no memo, this is ph, amount
coin_added = Coin(coin_spend.coin.name(), bytes32(condition[1]), int_from_bytes(condition[2]))
if type(condition[3]) != list:
# If it's not a list, it's not the correct format
continue
memos[coin_added.name()] = condition[3]
return memos
# Note that `coin_spends` used to have the bad name `coin_solutions`.
# Some API still expects this name. For now, we accept both names.
#
# TODO: continue this deprecation. Eventually, all code below here should be removed.
# 1. set `exclude_modern_keys` to `False` (and manually set to `True` where necessary)
# 2. set `include_legacy_keys` to `False` (and manually set to `False` where necessary)
# 3. remove all references to `include_legacy_keys=True`
# 4. remove all code below this point
```
###Assistant :
Retrieves the memos for additions in this spend_bundle, which are formatted as a list in the 3rd parameter of
CREATE_COIN. If there are no memos, the addition coin_id is not included. If they are not formatted as a list
of bytes, they are not included. This is expensive to call, it should not be used in full node code.
|
959 | def _clean_url_path_part(part):
# type: (str) -> str
# We unquote prior to quoting to make sure nothing is double quoted.
return urllib.parse.quote(urllib.parse.unquote(part))
|
Clean a "part" of a URL path (i.e. after splitting on "@" characters).
| 13 | 22 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _clean_url_path_part(part):
# type: (str) -> str
# We unquote prior to quoting to make sure nothing is double quoted.
return urllib.parse.quote(urllib.parse.unquote(part))
```
###Assistant :
Clean a "part" of a URL path (i.e. after splitting on "@" characters).
|
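A quick, self-contained illustration of why the unquote-then-quote round trip in the entry above avoids double encoding:
```Python
import urllib.parse


def _clean_url_path_part(part):
    # Unquote first so already-encoded input is not percent-encoded twice.
    return urllib.parse.quote(urllib.parse.unquote(part))


print(_clean_url_path_part("a%20b"))  # 'a%20b'  (not 'a%2520b')
print(_clean_url_path_part("a b"))    # 'a%20b'
```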
960 | def CheckAltTokens(filename, clean_lines, linenum, error):
line = clean_lines.elided[linenum]
# Avoid preprocessor lines
if Match(r'^\s*#', line):
return
# Last ditch effort to avoid multi-line comments. This will not help
# if the comment started before the current line or ended after the
# current line, but it catches most of the false positives. At least,
# it provides a way to workaround this warning for people who use
# multi-line comments in preprocessor macros.
#
# TODO(unknown): remove this once cpplint has better support for
# multi-line comments.
if line.find('/*') >= 0 or line.find('*/') >= 0:
return
for match in _ALT_TOKEN_REPLACEMENT_PATTERN.finditer(line):
error(filename, linenum, 'readability/alt_tokens', 2,
'Use operator %s instead of %s' % (
_ALT_TOKEN_REPLACEMENT[match.group(1)], match.group(1)))
| Check alternative keywords being used in boolean expressions.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
error: The function to call with any errors found.
| 40 | 114 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def CheckAltTokens(filename, clean_lines, linenum, error):
line = clean_lines.elided[linenum]
# Avoid preprocessor lines
if Match(r'^\s*#', line):
return
# Last ditch effort to avoid multi-line comments. This will not help
# if the comment started before the current line or ended after the
# current line, but it catches most of the false positives. At least,
# it provides a way to workaround this warning for people who use
# multi-line comments in preprocessor macros.
#
# TODO(unknown): remove this once cpplint has better support for
# multi-line comments.
if line.find('/*') >= 0 or line.find('*/') >= 0:
return
for match in _ALT_TOKEN_REPLACEMENT_PATTERN.finditer(line):
error(filename, linenum, 'readability/alt_tokens', 2,
'Use operator %s instead of %s' % (
_ALT_TOKEN_REPLACEMENT[match.group(1)], match.group(1)))
```
###Assistant : Check alternative keywords being used in boolean expressions.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
error: The function to call with any errors found.
|
961 | def register_ray():
try:
from ray.util.joblib.ray_backend import RayBackend
register_parallel_backend("ray", RayBackend)
except ImportError:
msg = (
"To use the ray backend you must install ray."
"Try running 'pip install ray'."
"See https://docs.ray.io/en/master/installation.html"
"for more information."
)
raise ImportError(msg)
__all__ = ["register_ray"]
| Register Ray Backend to be called with parallel_backend("ray"). | 8 | 39 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def register_ray():
try:
from ray.util.joblib.ray_backend import RayBackend
register_parallel_backend("ray", RayBackend)
except ImportError:
msg = (
"To use the ray backend you must install ray."
"Try running 'pip install ray'."
"See https://docs.ray.io/en/master/installation.html"
"for more information."
)
raise ImportError(msg)
__all__ = ["register_ray"]
```
###Assistant : Register Ray Backend to be called with parallel_backend("ray").
|
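A usage sketch for the entry above, assuming `ray` and `joblib` are installed and expose the documented `ray.util.joblib.register_ray` entry point:
```Python
import joblib
from ray.util.joblib import register_ray

register_ray()  # makes the "ray" backend known to joblib

with joblib.parallel_backend("ray"):
    squares = joblib.Parallel(n_jobs=4)(
        joblib.delayed(pow)(i, 2) for i in range(8)
    )

print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49]
```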
962 | def get_group_permissions(self, user_obj, obj=None):
return self._get_permissions(user_obj, obj, "group")
|
Return a set of permission strings the user `user_obj` has from the
groups they belong.
| 15 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_group_permissions(self, user_obj, obj=None):
return self._get_permissions(user_obj, obj, "group")
```
###Assistant :
Return a set of permission strings the user `user_obj` has from the
groups they belong.
|
963 | def get_evaluation_sets(self) -> List[dict]:
return self.evaluation_set_client.get_evaluation_sets()
|
Returns a list of uploaded evaluation sets to deepset cloud.
:return: list of evaluation sets as dicts
These contain ("name", "evaluation_set_id", "created_at", "matched_labels", "total_labels") as fields.
| 26 | 6 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_evaluation_sets(self) -> List[dict]:
return self.evaluation_set_client.get_evaluation_sets()
```
###Assistant :
Returns a list of uploaded evaluation sets to deepset cloud.
:return: list of evaluation sets as dicts
These contain ("name", "evaluation_set_id", "created_at", "matched_labels", "total_labels") as fields.
|
964 | def allowlist_svg(dirty_xml):
from lxml.html import clean
allow_tags = [
'xml',
'svg',
'circle',
'ellipse',
'line',
'path',
'polygon',
'polyline',
'rect'
]
cleaner = clean.Cleaner(
allow_tags=allow_tags,
style=True,
links=True,
add_nofollow=False,
page_structure=True,
safe_attrs_only=False,
remove_unknown_tags=False)
clean_xml = cleaner.clean_html(dirty_xml)
return clean_xml
| Filter out malicious/harmful content from SVG files
by defining allowed tags
| 11 | 34 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def allowlist_svg(dirty_xml):
from lxml.html import clean
allow_tags = [
'xml',
'svg',
'circle',
'ellipse',
'line',
'path',
'polygon',
'polyline',
'rect'
]
cleaner = clean.Cleaner(
allow_tags=allow_tags,
style=True,
links=True,
add_nofollow=False,
page_structure=True,
safe_attrs_only=False,
remove_unknown_tags=False)
clean_xml = cleaner.clean_html(dirty_xml)
return clean_xml
```
###Assistant : Filter out malicious/harmful content from SVG files
by defining allowed tags
|
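A quick usage sketch for the entry above, reusing its `allowlist_svg` definition (requires `lxml`; the sample markup is made up):
```Python
# Assumes allowlist_svg() from the entry above is already defined/imported.
dirty = '<svg><script>alert(1)</script><circle cx="5" cy="5" r="4"/></svg>'
cleaned = allowlist_svg(dirty)
print(cleaned)  # the script payload is dropped; allowed shape tags such as <circle> remain
```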
965 | def _sort_filelist(self) -> None:
for filename, image, alignments in self._iterator():
self.score_image(filename, image, alignments)
self.sort()
logger.debug("sorted list: %s",
[r[0] if isinstance(r, (tuple, list)) else r for r in self._result])
| Call the sort method's logic to populate the :attr:`_results` attribute.
Put logic for scoring an individual frame in :attr:`score_image` of the child
Returns
-------
list
The sorted file. A list of tuples with the filename in the first position and score in
the second position
| 46 | 28 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _sort_filelist(self) -> None:
for filename, image, alignments in self._iterator():
self.score_image(filename, image, alignments)
self.sort()
logger.debug("sorted list: %s",
[r[0] if isinstance(r, (tuple, list)) else r for r in self._result])
```
###Assistant : Call the sort method's logic to populate the :attr:`_results` attribute.
Put logic for scoring an individual frame in :attr:`score_image` of the child
Returns
-------
list
The sorted file. A list of tuples with the filename in the first position and score in
the second position
|
966 | def finalize_variable_values(self, var_list):
if self.use_ema:
# If the optimizer uses EMA, then when finalizing, we replace the model
# variable value with its moving average stored inside optimizer.
self._overwrite_model_variables_with_average_value(var_list)
| Set the final value of model's trainable variables.
Sometimes there are some extra steps before ending the variable updates,
such as overriding the model variables with its average value.
Args:
var_list: list of model variables.
| 35 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def finalize_variable_values(self, var_list):
if self.use_ema:
# If the optimizer uses EMA, then when finalizing, we replace the model
# variable value with its moving average stored inside optimizer.
self._overwrite_model_variables_with_average_value(var_list)
```
###Assistant : Set the final value of model's trainable variables.
Sometimes there are some extra steps before ending the variable updates,
such as overriding the model variables with its average value.
Args:
var_list: list of model variables.
|
967 | async def wait_floating_requests_end(self):
while self.total_num_floating_tasks_alive > 0:
await asyncio.sleep(0)
|
Await this coroutine to make sure that all the floating tasks that the request handler may bring are properly consumed
| 20 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def wait_floating_requests_end(self):
while self.total_num_floating_tasks_alive > 0:
await asyncio.sleep(0)
```
###Assistant :
Await this coroutine to make sure that all the floating tasks that the request handler may bring are properly consumed
|
968 | def _lu_impl(A, pivot=True, get_infos=False, out=None):
# type: (Tensor, bool, bool, Any) -> Tuple[Tensor, Tensor, Tensor]
r
# If get_infos is True, then we don't need to check for errors and vice versa
return torch._lu_with_info(A, pivot=pivot, check_errors=(not get_infos))
if TYPE_CHECKING:
_ListOrSeq = Sequence[Tensor]
else:
_ListOrSeq = List[Tensor]
| Computes the LU factorization of a matrix or batches of matrices
:attr:`A`. Returns a tuple containing the LU factorization and
pivots of :attr:`A`. Pivoting is done if :attr:`pivot` is set to
``True``.
.. note::
* The returned permutation matrix for every matrix in the batch is
represented by a 1-indexed vector of size ``min(A.shape[-2], A.shape[-1])``.
``pivots[i] == j`` represents that in the ``i``-th step of the algorithm,
the ``i``-th row was permuted with the ``j-1``-th row.
* LU factorization with :attr:`pivot` = ``False`` is not available
for CPU, and attempting to do so will throw an error. However,
LU factorization with :attr:`pivot` = ``False`` is available for
CUDA.
* This function does not check if the factorization was successful
or not if :attr:`get_infos` is ``True`` since the status of the
factorization is present in the third element of the return tuple.
* In the case of batches of square matrices with size less or equal
to 32 on a CUDA device, the LU factorization is repeated for
singular matrices due to the bug in the MAGMA library
(see magma issue 13).
* ``L``, ``U``, and ``P`` can be derived using :func:`torch.lu_unpack`.
.. warning::
The gradients of this function will only be finite when :attr:`A` is full rank.
This is because the LU decomposition is just differentiable at full rank matrices.
Furthermore, if :attr:`A` is close to not being full rank,
the gradient will be numerically unstable as it depends on the computation of :math:`L^{-1}` and :math:`U^{-1}`.
Args:
A (Tensor): the tensor to factor of size :math:`(*, m, n)`
pivot (bool, optional): controls whether pivoting is done. Default: ``True``
get_infos (bool, optional): if set to ``True``, returns an info IntTensor.
Default: ``False``
out (tuple, optional): optional output tuple. If :attr:`get_infos` is ``True``,
then the elements in the tuple are Tensor, IntTensor,
and IntTensor. If :attr:`get_infos` is ``False``, then the
elements in the tuple are Tensor, IntTensor. Default: ``None``
Returns:
(Tensor, IntTensor, IntTensor (optional)): A tuple of tensors containing
- **factorization** (*Tensor*): the factorization of size :math:`(*, m, n)`
- **pivots** (*IntTensor*): the pivots of size :math:`(*, \text{min}(m, n))`.
``pivots`` stores all the intermediate transpositions of rows.
The final permutation ``perm`` could be reconstructed by
applying ``swap(perm[i], perm[pivots[i] - 1])`` for ``i = 0, ..., pivots.size(-1) - 1``,
where ``perm`` is initially the identity permutation of :math:`m` elements
(essentially this is what :func:`torch.lu_unpack` is doing).
- **infos** (*IntTensor*, *optional*): if :attr:`get_infos` is ``True``, this is a tensor of
size :math:`(*)` where non-zero values indicate whether factorization for the matrix or
each minibatch has succeeded or failed
Example::
>>> A = torch.randn(2, 3, 3)
>>> A_LU, pivots = torch.lu(A)
>>> A_LU
tensor([[[ 1.3506, 2.5558, -0.0816],
[ 0.1684, 1.1551, 0.1940],
[ 0.1193, 0.6189, -0.5497]],
[[ 0.4526, 1.2526, -0.3285],
[-0.7988, 0.7175, -0.9701],
[ 0.2634, -0.9255, -0.3459]]])
>>> pivots
tensor([[ 3, 3, 3],
[ 3, 3, 3]], dtype=torch.int32)
>>> A_LU, pivots, info = torch.lu(A, get_infos=True)
>>> if info.nonzero().size(0) == 0:
... print('LU factorization succeeded for all samples!')
LU factorization succeeded for all samples!
| 497 | 46 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _lu_impl(A, pivot=True, get_infos=False, out=None):
# type: (Tensor, bool, bool, Any) -> Tuple[Tensor, Tensor, Tensor]
r
# If get_infos is True, then we don't need to check for errors and vice versa
return torch._lu_with_info(A, pivot=pivot, check_errors=(not get_infos))
if TYPE_CHECKING:
_ListOrSeq = Sequence[Tensor]
else:
_ListOrSeq = List[Tensor]
```
###Assistant : Computes the LU factorization of a matrix or batches of matrices
:attr:`A`. Returns a tuple containing the LU factorization and
pivots of :attr:`A`. Pivoting is done if :attr:`pivot` is set to
``True``.
.. note::
* The returned permutation matrix for every matrix in the batch is
represented by a 1-indexed vector of size ``min(A.shape[-2], A.shape[-1])``.
``pivots[i] == j`` represents that in the ``i``-th step of the algorithm,
the ``i``-th row was permuted with the ``j-1``-th row.
* LU factorization with :attr:`pivot` = ``False`` is not available
for CPU, and attempting to do so will throw an error. However,
LU factorization with :attr:`pivot` = ``False`` is available for
CUDA.
* This function does not check if the factorization was successful
or not if :attr:`get_infos` is ``True`` since the status of the
factorization is present in the third element of the return tuple.
* In the case of batches of square matrices with size less or equal
to 32 on a CUDA device, the LU factorization is repeated for
singular matrices due to the bug in the MAGMA library
(see magma issue 13).
* ``L``, ``U``, and ``P`` can be derived using :func:`torch.lu_unpack`.
.. warning::
The gradients of this function will only be finite when :attr:`A` is full rank.
This is because the LU decomposition is just differentiable at full rank matrices.
Furthermore, if :attr:`A` is close to not being full rank,
the gradient will be numerically unstable as it depends on the computation of :math:`L^{-1}` and :math:`U^{-1}`.
Args:
A (Tensor): the tensor to factor of size :math:`(*, m, n)`
pivot (bool, optional): controls whether pivoting is done. Default: ``True``
get_infos (bool, optional): if set to ``True``, returns an info IntTensor.
Default: ``False``
out (tuple, optional): optional output tuple. If :attr:`get_infos` is ``True``,
then the elements in the tuple are Tensor, IntTensor,
and IntTensor. If :attr:`get_infos` is ``False``, then the
elements in the tuple are Tensor, IntTensor. Default: ``None``
Returns:
(Tensor, IntTensor, IntTensor (optional)): A tuple of tensors containing
- **factorization** (*Tensor*): the factorization of size :math:`(*, m, n)`
- **pivots** (*IntTensor*): the pivots of size :math:`(*, \text{min}(m, n))`.
``pivots`` stores all the intermediate transpositions of rows.
The final permutation ``perm`` could be reconstructed by
applying ``swap(perm[i], perm[pivots[i] - 1])`` for ``i = 0, ..., pivots.size(-1) - 1``,
where ``perm`` is initially the identity permutation of :math:`m` elements
(essentially this is what :func:`torch.lu_unpack` is doing).
- **infos** (*IntTensor*, *optional*): if :attr:`get_infos` is ``True``, this is a tensor of
size :math:`(*)` where non-zero values indicate whether factorization for the matrix or
each minibatch has succeeded or failed
Example::
>>> A = torch.randn(2, 3, 3)
>>> A_LU, pivots = torch.lu(A)
>>> A_LU
tensor([[[ 1.3506, 2.5558, -0.0816],
[ 0.1684, 1.1551, 0.1940],
[ 0.1193, 0.6189, -0.5497]],
[[ 0.4526, 1.2526, -0.3285],
[-0.7988, 0.7175, -0.9701],
[ 0.2634, -0.9255, -0.3459]]])
>>> pivots
tensor([[ 3, 3, 3],
[ 3, 3, 3]], dtype=torch.int32)
>>> A_LU, pivots, info = torch.lu(A, get_infos=True)
>>> if info.nonzero().size(0) == 0:
... print('LU factorization succeeded for all samples!')
LU factorization succeeded for all samples!
|
969 | def test_new_configs_appservice_worker(self) -> None:
appservice_worker_config = self._make_worker_config(
worker_app="synapse.app.generic_worker", worker_name="worker1"
)
self.assertTrue(
appservice_worker_config._should_this_worker_perform_duty(
{
"notify_appservices_from_worker": "worker1",
},
"notify_appservices",
"synapse.app.appservice",
"notify_appservices_from_worker",
)
)
self.assertFalse(
appservice_worker_config._should_this_worker_perform_duty(
{
"notify_appservices_from_worker": "worker2",
},
"notify_appservices",
"synapse.app.appservice",
"notify_appservices_from_worker",
)
)
|
Tests new config options. This is for the worker's config.
| 10 | 32 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_new_configs_appservice_worker(self) -> None:
appservice_worker_config = self._make_worker_config(
worker_app="synapse.app.generic_worker", worker_name="worker1"
)
self.assertTrue(
appservice_worker_config._should_this_worker_perform_duty(
{
"notify_appservices_from_worker": "worker1",
},
"notify_appservices",
"synapse.app.appservice",
"notify_appservices_from_worker",
)
)
self.assertFalse(
appservice_worker_config._should_this_worker_perform_duty(
{
"notify_appservices_from_worker": "worker2",
},
"notify_appservices",
"synapse.app.appservice",
"notify_appservices_from_worker",
)
)
```
###Assistant :
Tests new config options. This is for the worker's config.
|
970 | def E_nl(n, Z=1):
n, Z = S(n), S(Z)
if n.is_integer and (n < 1):
raise ValueError("'n' must be positive integer")
return -Z**2/(2*n**2)
|
Returns the energy of the state (n, l) in Hartree atomic units.
The energy does not depend on "l".
Parameters
==========
n : integer
Principal Quantum Number which is
an integer with possible values as 1, 2, 3, 4,...
Z :
Atomic number (1 for Hydrogen, 2 for Helium, ...)
Examples
========
>>> from sympy.physics.hydrogen import E_nl
>>> from sympy.abc import n, Z
>>> E_nl(n, Z)
-Z**2/(2*n**2)
>>> E_nl(1)
-1/2
>>> E_nl(2)
-1/8
>>> E_nl(3)
-1/18
>>> E_nl(3, 47)
-2209/18
| 80 | 22 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def E_nl(n, Z=1):
n, Z = S(n), S(Z)
if n.is_integer and (n < 1):
raise ValueError("'n' must be positive integer")
return -Z**2/(2*n**2)
```
###Assistant :
Returns the energy of the state (n, l) in Hartree atomic units.
The energy does not depend on "l".
Parameters
==========
n : integer
Principal Quantum Number which is
an integer with possible values as 1, 2, 3, 4,...
Z :
Atomic number (1 for Hydrogen, 2 for Helium, ...)
Examples
========
>>> from sympy.physics.hydrogen import E_nl
>>> from sympy.abc import n, Z
>>> E_nl(n, Z)
-Z**2/(2*n**2)
>>> E_nl(1)
-1/2
>>> E_nl(2)
-1/8
>>> E_nl(3)
-1/18
>>> E_nl(3, 47)
-2209/18
|
971 | def test_model_checkpoint_no_extraneous_invocations(tmpdir):
model = LogInTwoMethods()
num_epochs = 4
model_checkpoint = ModelCheckpointTestInvocations(monitor="early_stop_on", expected_count=num_epochs, save_top_k=-1)
trainer = Trainer(
strategy="ddp_spawn",
accelerator="cpu",
devices=2,
default_root_dir=tmpdir,
callbacks=[model_checkpoint],
max_epochs=num_epochs,
)
trainer.fit(model)
assert trainer.state.finished, f"Training failed with {trainer.state}"
| Test to ensure that the model callback saves the checkpoints only once in distributed mode. | 15 | 30 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_model_checkpoint_no_extraneous_invocations(tmpdir):
model = LogInTwoMethods()
num_epochs = 4
model_checkpoint = ModelCheckpointTestInvocations(monitor="early_stop_on", expected_count=num_epochs, save_top_k=-1)
trainer = Trainer(
strategy="ddp_spawn",
accelerator="cpu",
devices=2,
default_root_dir=tmpdir,
callbacks=[model_checkpoint],
max_epochs=num_epochs,
)
trainer.fit(model)
assert trainer.state.finished, f"Training failed with {trainer.state}"
```
###Assistant : Test to ensure that the model callback saves the checkpoints only once in distributed mode.
|
972 | def find_module(self, fullname, path):
warnings.warn("MetaPathFinder.find_module() is deprecated since Python "
"3.4 in favor of MetaPathFinder.find_spec() and is "
"slated for removal in Python 3.12",
DeprecationWarning,
stacklevel=2)
if not hasattr(self, 'find_spec'):
return None
found = self.find_spec(fullname, path)
return found.loader if found is not None else None
| Return a loader for the module.
If no module is found, return None. The fullname is a str and
the path is a list of strings or None.
This method is deprecated since Python 3.4 in favor of
finder.find_spec(). If find_spec() exists then backwards-compatible
functionality is provided for this method.
| 50 | 45 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def find_module(self, fullname, path):
warnings.warn("MetaPathFinder.find_module() is deprecated since Python "
"3.4 in favor of MetaPathFinder.find_spec() and is "
"slated for removal in Python 3.12",
DeprecationWarning,
stacklevel=2)
if not hasattr(self, 'find_spec'):
return None
found = self.find_spec(fullname, path)
return found.loader if found is not None else None
```
###Assistant : Return a loader for the module.
If no module is found, return None. The fullname is a str and
the path is a list of strings or None.
This method is deprecated since Python 3.4 in favor of
finder.find_spec(). If find_spec() exists then backwards-compatible
functionality is provided for this method.
|
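A minimal modern-finder sketch related to the entry above: implementing `find_spec()` directly means the deprecated `find_module()` shim is never consulted. The module name and file path here are made up for illustration.
```Python
import importlib.abc
import importlib.util
import sys


class OneModuleFinder(importlib.abc.MetaPathFinder):
    """Resolve a single, known module name from an explicit file path."""

    def __init__(self, name, path):
        self._name, self._path = name, path

    def find_spec(self, fullname, path, target=None):
        if fullname != self._name:
            return None  # let the next finder on sys.meta_path try
        return importlib.util.spec_from_file_location(fullname, self._path)


# Hypothetical registration; "/tmp/plugin_mod.py" is a placeholder path.
sys.meta_path.append(OneModuleFinder("plugin_mod", "/tmp/plugin_mod.py"))
```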
973 | def _generate_individual(self, parameter_id):
pos = -1
for i in range(len(self.population)):
if self.population[i].result is None:
pos = i
break
if pos != -1:
indiv = copy.deepcopy(self.population[pos])
self.population.pop(pos)
else:
random.shuffle(self.population)
# avoid only 1 individual has result
if len(self.population) > 1 and self.population[0].result < self.population[1].result:
self.population[0] = self.population[1]
# mutation on the worse individual
space = json2space(self.searchspace_json,
self.population[0].config)
is_rand = dict()
mutation_pos = space[random.randint(0, len(space)-1)]
for i in range(len(self.space)):
is_rand[self.space[i]] = (self.space[i] == mutation_pos)
config = json2parameter(
self.searchspace_json, is_rand, self.random_state, self.population[0].config)
if len(self.population) > 1:
self.population.pop(1)
indiv = Individual(config=config)
# remove "_index" from config and save params-id
self.running_trials[parameter_id] = indiv
config = split_index(indiv.config)
return config
|
This function will generate the config for a trial.
If at the first generation, randomly generates individuals to satisfy self.population_size.
Otherwise, random choose a pair of individuals and compare their fitnesses.
The worst of the pair will be removed. Copy the best of the pair and mutate it to generate a new individual.
Parameters
----------
parameter_id : int
Returns
-------
dict
A group of candidate parameters that evolution tuner generated.
| 70 | 103 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _generate_individual(self, parameter_id):
pos = -1
for i in range(len(self.population)):
if self.population[i].result is None:
pos = i
break
if pos != -1:
indiv = copy.deepcopy(self.population[pos])
self.population.pop(pos)
else:
random.shuffle(self.population)
# avoid only 1 individual has result
if len(self.population) > 1 and self.population[0].result < self.population[1].result:
self.population[0] = self.population[1]
# mutation on the worse individual
space = json2space(self.searchspace_json,
self.population[0].config)
is_rand = dict()
mutation_pos = space[random.randint(0, len(space)-1)]
for i in range(len(self.space)):
is_rand[self.space[i]] = (self.space[i] == mutation_pos)
config = json2parameter(
self.searchspace_json, is_rand, self.random_state, self.population[0].config)
if len(self.population) > 1:
self.population.pop(1)
indiv = Individual(config=config)
# remove "_index" from config and save params-id
self.running_trials[parameter_id] = indiv
config = split_index(indiv.config)
return config
```
###Assistant :
This function will generate the config for a trial.
If at the first generation, randomly generates individuals to satisfy self.population_size.
Otherwise, random choose a pair of individuals and compare their fitnesses.
The worst of the pair will be removed. Copy the best of the pair and mutate it to generate a new individual.
Parameters
----------
parameter_id : int
Returns
-------
dict
A group of candidate parameters that evolution tuner generated.
|
974 | async def connect(self):
connection = {"client_id": self.client_id, "websocket": self.websocket}
logging.info(f"Connecting WebSocket: {connection}")
await self.websocket.accept()
WSProgressHandler.instances.append(self)
|
Called when a new client connects to the websocket.
| 9 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def connect(self):
connection = {"client_id": self.client_id, "websocket": self.websocket}
logging.info(f"Connecting WebSocket: {connection}")
await self.websocket.accept()
WSProgressHandler.instances.append(self)
```
###Assistant :
Called when a new client connects to the websocket.
|
975 | def prepare_test_img(self, idx):
img_info = self.data_infos[idx]
results = dict(img_info=img_info)
if self.proposals is not None:
results['proposals'] = self.proposals[idx]
self.pre_pipeline(results)
return self.pipeline(results)
| Get testing data after pipeline.
Args:
idx (int): Index of data.
Returns:
dict: Testing data after pipeline with new keys introduced by \
pipeline.
| 24 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def prepare_test_img(self, idx):
img_info = self.data_infos[idx]
results = dict(img_info=img_info)
if self.proposals is not None:
results['proposals'] = self.proposals[idx]
self.pre_pipeline(results)
return self.pipeline(results)
```
###Assistant : Get testing data after pipeline.
Args:
idx (int): Index of data.
Returns:
dict: Testing data after pipeline with new keys introduced by \
pipeline.
|
976 | def get_data(filters):
data = []
conditions = get_conditions(filters)
salary_slips = frappe.db.sql(
% (conditions),
as_dict=1,
)
component_type_dict = frappe._dict(
frappe.db.sql(
)
)
if not len(component_type_dict):
return []
entry = frappe.db.sql(
% (conditions, ", ".join(["%s"] * len(component_type_dict))),
tuple(component_type_dict.keys()),
as_dict=1,
)
data_list = prepare_data(entry, component_type_dict)
for d in salary_slips:
total = 0
if data_list.get(d.name):
employee = {
"employee": data_list.get(d.name).get("employee"),
"employee_name": data_list.get(d.name).get("employee_name"),
"pf_account": data_list.get(d.name).get("pf_account"),
}
if data_list.get(d.name).get("Provident Fund"):
employee["pf_amount"] = data_list.get(d.name).get("Provident Fund")
total += data_list.get(d.name).get("Provident Fund")
if data_list.get(d.name).get("Additional Provident Fund"):
employee["additional_pf"] = data_list.get(d.name).get("Additional Provident Fund")
total += data_list.get(d.name).get("Additional Provident Fund")
if data_list.get(d.name).get("Provident Fund Loan"):
employee["pf_loan"] = data_list.get(d.name).get("Provident Fund Loan")
total += data_list.get(d.name).get("Provident Fund Loan")
employee["total"] = total
data.append(employee)
return data
@frappe.whitelist() | select sal.name from `tabSalary Slip` sal
where docstatus = 1 %s
select name, component_type from `tabSalary Component`
where component_type in ('Provident Fund', 'Additional Provident Fund', 'Provident Fund Loan') select sal.name, sal.employee, sal.employee_name, ded.salary_component, ded.amount
from `tabSalary Slip` sal, `tabSalary Detail` ded
where sal.name = ded.parent
and ded.parentfield = 'deductions'
and ded.parenttype = 'Salary Slip'
and sal.docstatus = 1 %s
and ded.salary_component in (%s)
| 63 | 107 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_data(filters):
data = []
conditions = get_conditions(filters)
salary_slips = frappe.db.sql(
% (conditions),
as_dict=1,
)
component_type_dict = frappe._dict(
frappe.db.sql(
)
)
if not len(component_type_dict):
return []
entry = frappe.db.sql(
% (conditions, ", ".join(["%s"] * len(component_type_dict))),
tuple(component_type_dict.keys()),
as_dict=1,
)
data_list = prepare_data(entry, component_type_dict)
for d in salary_slips:
total = 0
if data_list.get(d.name):
employee = {
"employee": data_list.get(d.name).get("employee"),
"employee_name": data_list.get(d.name).get("employee_name"),
"pf_account": data_list.get(d.name).get("pf_account"),
}
if data_list.get(d.name).get("Provident Fund"):
employee["pf_amount"] = data_list.get(d.name).get("Provident Fund")
total += data_list.get(d.name).get("Provident Fund")
if data_list.get(d.name).get("Additional Provident Fund"):
employee["additional_pf"] = data_list.get(d.name).get("Additional Provident Fund")
total += data_list.get(d.name).get("Additional Provident Fund")
if data_list.get(d.name).get("Provident Fund Loan"):
employee["pf_loan"] = data_list.get(d.name).get("Provident Fund Loan")
total += data_list.get(d.name).get("Provident Fund Loan")
employee["total"] = total
data.append(employee)
return data
@frappe.whitelist()
```
###Assistant : select sal.name from `tabSalary Slip` sal
where docstatus = 1 %s
select name, component_type from `tabSalary Component`
where component_type in ('Provident Fund', 'Additional Provident Fund', 'Provident Fund Loan') select sal.name, sal.employee, sal.employee_name, ded.salary_component, ded.amount
from `tabSalary Slip` sal, `tabSalary Detail` ded
where sal.name = ded.parent
and ded.parentfield = 'deductions'
and ded.parenttype = 'Salary Slip'
and sal.docstatus = 1 %s
and ded.salary_component in (%s)
|
977 | def as_dict(self) -> dict[str, Any]:
return {
"extended_dict": self.as_extended_dict(),
"short_dict": self.as_short_dict(),
}
| Return a dictionary version of this ActionTrace for saving. | 9 | 12 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def as_dict(self) -> dict[str, Any]:
return {
"extended_dict": self.as_extended_dict(),
"short_dict": self.as_short_dict(),
}
```
###Assistant : Return a dictionary version of this ActionTrace for saving.
|
978 | def deserialize(config, custom_objects=None, **kwargs):
# loss_scale_optimizer has a direct dependency of optimizer, import here
# rather than top to avoid the cyclic dependency.
from keras.mixed_precision import (
loss_scale_optimizer,
)
use_legacy_optimizer = kwargs.pop("use_legacy_optimizer", False)
if len(config["config"]) > 0:
# If the optimizer config is not empty, then we use the value of
# `is_legacy_optimizer` to override `use_legacy_optimizer`. If
# `is_legacy_optimizer` does not exist in config, it means we are
# using the legacy optimizer.
use_legacy_optimizer = config["config"].get("is_legacy_optimizer", True)
if (
tf.__internal__.tf2.enabled()
and tf.executing_eagerly()
and not use_legacy_optimizer
):
all_classes = {
"adadelta": adadelta_experimental.Adadelta,
"adagrad": adagrad_experimental.Adagrad,
"adam": adam_experimental.Adam,
"adamax": adamax_experimental.Adamax,
"experimentaladadelta": adadelta_experimental.Adadelta,
"experimentaladagrad": adagrad_experimental.Adagrad,
"experimentaladam": adam_experimental.Adam,
"experimentalsgd": sgd_experimental.SGD,
"nadam": nadam_experimental.Nadam,
"rmsprop": rmsprop_experimental.RMSprop,
"sgd": sgd_experimental.SGD,
"ftrl": ftrl_experimental.Ftrl,
"lossscaleoptimizer": loss_scale_optimizer.LossScaleOptimizerV3,
"lossscaleoptimizerv3": loss_scale_optimizer.LossScaleOptimizerV3,
# LossScaleOptimizerV1 was an old version of LSO that was removed.
# Deserializing it turns it into a LossScaleOptimizer
"lossscaleoptimizerv1": loss_scale_optimizer.LossScaleOptimizer,
}
else:
all_classes = {
"adadelta": adadelta_v2.Adadelta,
"adagrad": adagrad_v2.Adagrad,
"adam": adam_v2.Adam,
"adamax": adamax_v2.Adamax,
"experimentaladadelta": adadelta_experimental.Adadelta,
"experimentaladagrad": adagrad_experimental.Adagrad,
"experimentaladam": adam_experimental.Adam,
"experimentalsgd": sgd_experimental.SGD,
"nadam": nadam_v2.Nadam,
"rmsprop": rmsprop_v2.RMSprop,
"sgd": gradient_descent_v2.SGD,
"ftrl": ftrl_v2.Ftrl,
"lossscaleoptimizer": loss_scale_optimizer.LossScaleOptimizer,
"lossscaleoptimizerv3": loss_scale_optimizer.LossScaleOptimizerV3,
# LossScaleOptimizerV1 was an old version of LSO that was removed.
# Deserializing it turns it into a LossScaleOptimizer
"lossscaleoptimizerv1": loss_scale_optimizer.LossScaleOptimizer,
}
# Make deserialization case-insensitive for built-in optimizers.
if config["class_name"].lower() in all_classes:
config["class_name"] = config["class_name"].lower()
return deserialize_keras_object(
config,
module_objects=all_classes,
custom_objects=custom_objects,
printable_module_name="optimizer",
)
@keras_export(
"keras.__internal__.optimizers.convert_to_legacy_optimizer", v1=[]
) | Inverse of the `serialize` function.
Args:
config: Optimizer configuration dictionary.
custom_objects: Optional dictionary mapping names (strings) to custom
objects (classes and functions) to be considered during
deserialization.
Returns:
A Keras Optimizer instance.
| 32 | 218 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def deserialize(config, custom_objects=None, **kwargs):
# loss_scale_optimizer has a direct dependency of optimizer, import here
# rather than top to avoid the cyclic dependency.
from keras.mixed_precision import (
loss_scale_optimizer,
)
use_legacy_optimizer = kwargs.pop("use_legacy_optimizer", False)
if len(config["config"]) > 0:
# If the optimizer config is not empty, then we use the value of
# `is_legacy_optimizer` to override `use_legacy_optimizer`. If
# `is_legacy_optimizer` does not exist in config, it means we are
# using the legacy optimizer.
use_legacy_optimizer = config["config"].get("is_legacy_optimizer", True)
if (
tf.__internal__.tf2.enabled()
and tf.executing_eagerly()
and not use_legacy_optimizer
):
all_classes = {
"adadelta": adadelta_experimental.Adadelta,
"adagrad": adagrad_experimental.Adagrad,
"adam": adam_experimental.Adam,
"adamax": adamax_experimental.Adamax,
"experimentaladadelta": adadelta_experimental.Adadelta,
"experimentaladagrad": adagrad_experimental.Adagrad,
"experimentaladam": adam_experimental.Adam,
"experimentalsgd": sgd_experimental.SGD,
"nadam": nadam_experimental.Nadam,
"rmsprop": rmsprop_experimental.RMSprop,
"sgd": sgd_experimental.SGD,
"ftrl": ftrl_experimental.Ftrl,
"lossscaleoptimizer": loss_scale_optimizer.LossScaleOptimizerV3,
"lossscaleoptimizerv3": loss_scale_optimizer.LossScaleOptimizerV3,
# LossScaleOptimizerV1 was an old version of LSO that was removed.
# Deserializing it turns it into a LossScaleOptimizer
"lossscaleoptimizerv1": loss_scale_optimizer.LossScaleOptimizer,
}
else:
all_classes = {
"adadelta": adadelta_v2.Adadelta,
"adagrad": adagrad_v2.Adagrad,
"adam": adam_v2.Adam,
"adamax": adamax_v2.Adamax,
"experimentaladadelta": adadelta_experimental.Adadelta,
"experimentaladagrad": adagrad_experimental.Adagrad,
"experimentaladam": adam_experimental.Adam,
"experimentalsgd": sgd_experimental.SGD,
"nadam": nadam_v2.Nadam,
"rmsprop": rmsprop_v2.RMSprop,
"sgd": gradient_descent_v2.SGD,
"ftrl": ftrl_v2.Ftrl,
"lossscaleoptimizer": loss_scale_optimizer.LossScaleOptimizer,
"lossscaleoptimizerv3": loss_scale_optimizer.LossScaleOptimizerV3,
# LossScaleOptimizerV1 was an old version of LSO that was removed.
# Deserializing it turns it into a LossScaleOptimizer
"lossscaleoptimizerv1": loss_scale_optimizer.LossScaleOptimizer,
}
# Make deserialization case-insensitive for built-in optimizers.
if config["class_name"].lower() in all_classes:
config["class_name"] = config["class_name"].lower()
return deserialize_keras_object(
config,
module_objects=all_classes,
custom_objects=custom_objects,
printable_module_name="optimizer",
)
@keras_export(
"keras.__internal__.optimizers.convert_to_legacy_optimizer", v1=[]
)
```
###Assistant : Inverse of the `serialize` function.
Args:
config: Optimizer configuration dictionary.
custom_objects: Optional dictionary mapping names (strings) to custom
objects (classes and functions) to be considered during
deserialization.
Returns:
A Keras Optimizer instance.
|
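A round-trip sketch for the entry above using the public `tf.keras.optimizers.serialize`/`deserialize` pair (assuming a standard TensorFlow install; the exact config keys vary by version):
```Python
import tensorflow as tf

opt = tf.keras.optimizers.Adam(learning_rate=1e-3)
config = tf.keras.optimizers.serialize(opt)       # roughly {'class_name': 'Adam', 'config': {...}}
restored = tf.keras.optimizers.deserialize(config)

print(type(restored).__name__)  # Adam
```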
979 | def multiply(self, a, b):
a = _convert_other(a, raiseit=True)
r = a.__mul__(b, context=self)
if r is NotImplemented:
raise TypeError("Unable to convert %s to Decimal" % b)
else:
return r
| multiply multiplies two operands.
If either operand is a special value then the general rules apply.
Otherwise, the operands are multiplied together
('long multiplication'), resulting in a number which may be as long as
the sum of the lengths of the two operands.
>>> ExtendedContext.multiply(Decimal('1.20'), Decimal('3'))
Decimal('3.60')
>>> ExtendedContext.multiply(Decimal('7'), Decimal('3'))
Decimal('21')
>>> ExtendedContext.multiply(Decimal('0.9'), Decimal('0.8'))
Decimal('0.72')
>>> ExtendedContext.multiply(Decimal('0.9'), Decimal('-0'))
Decimal('-0.0')
>>> ExtendedContext.multiply(Decimal('654321'), Decimal('654321'))
Decimal('4.28135971E+11')
>>> ExtendedContext.multiply(7, 7)
Decimal('49')
>>> ExtendedContext.multiply(Decimal(7), 7)
Decimal('49')
>>> ExtendedContext.multiply(7, Decimal(7))
Decimal('49')
| 75 | 28 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def multiply(self, a, b):
a = _convert_other(a, raiseit=True)
r = a.__mul__(b, context=self)
if r is NotImplemented:
raise TypeError("Unable to convert %s to Decimal" % b)
else:
return r
```
###Assistant : multiply multiplies two operands.
If either operand is a special value then the general rules apply.
Otherwise, the operands are multiplied together
('long multiplication'), resulting in a number which may be as long as
the sum of the lengths of the two operands.
>>> ExtendedContext.multiply(Decimal('1.20'), Decimal('3'))
Decimal('3.60')
>>> ExtendedContext.multiply(Decimal('7'), Decimal('3'))
Decimal('21')
>>> ExtendedContext.multiply(Decimal('0.9'), Decimal('0.8'))
Decimal('0.72')
>>> ExtendedContext.multiply(Decimal('0.9'), Decimal('-0'))
Decimal('-0.0')
>>> ExtendedContext.multiply(Decimal('654321'), Decimal('654321'))
Decimal('4.28135971E+11')
>>> ExtendedContext.multiply(7, 7)
Decimal('49')
>>> ExtendedContext.multiply(Decimal(7), 7)
Decimal('49')
>>> ExtendedContext.multiply(7, Decimal(7))
Decimal('49')
|
980 | def test_run_cleanup_skip_archive(self, cleanup_table_mock, kwargs, should_skip):
run_cleanup(
clean_before_timestamp=None,
table_names=['log'],
dry_run=None,
verbose=None,
confirm=False,
**kwargs,
)
assert cleanup_table_mock.call_args[1]['skip_archive'] is should_skip
| test that delete confirmation input is called when appropriate | 9 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_run_cleanup_skip_archive(self, cleanup_table_mock, kwargs, should_skip):
run_cleanup(
clean_before_timestamp=None,
table_names=['log'],
dry_run=None,
verbose=None,
confirm=False,
**kwargs,
)
assert cleanup_table_mock.call_args[1]['skip_archive'] is should_skip
```
###Assistant : test that delete confirmation input is called when appropriate
|
981 | def process(self) -> None:
if not self.is_downloaded():
self.download()
self.process_downloaded_dataset()
| Process the dataset into a dataframe and save it at self.processed_dataset_path. | 11 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def process(self) -> None:
if not self.is_downloaded():
self.download()
self.process_downloaded_dataset()
```
###Assistant : Process the dataset into a dataframe and save it at self.processed_dataset_path.
|
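The `process` method in the entry above is a template-method hook: `is_downloaded`, `download`, and `process_downloaded_dataset` are expected to come from the concrete dataset class. Below is a minimal, self-contained sketch of such a class; the class name, file paths, and CSV handling are hypothetical illustrations, not the project's actual loaders.
```Python
# Hypothetical CSV loader wired into the process()/is_downloaded()/download() hooks above.
import os
import pandas as pd


class CSVDatasetLoader:
    def __init__(self, raw_dataset_path: str, processed_dataset_path: str):
        self.raw_dataset_path = raw_dataset_path                # where download() writes
        self.processed_dataset_path = processed_dataset_path    # where processing saves

    def is_downloaded(self) -> bool:
        # The dataset counts as downloaded once the raw file exists on disk.
        return os.path.exists(self.raw_dataset_path)

    def download(self) -> None:
        # Placeholder: a real loader would fetch the raw data here.
        raise NotImplementedError("fetch raw data into self.raw_dataset_path")

    def process_downloaded_dataset(self) -> None:
        # Turn the raw file into a dataframe and persist it at the processed path.
        df = pd.read_csv(self.raw_dataset_path)
        df.to_csv(self.processed_dataset_path, index=False)

    def process(self) -> None:
        # Same control flow as the entry above: download lazily, then process.
        if not self.is_downloaded():
            self.download()
        self.process_downloaded_dataset()
```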
982 | def test_healthy_only_works_for_list_of_functions(self):
actors = [Actor.remote(i) for i in range(4)]
manager = FaultTolerantActorManager(actors=actors)
# Mark first and second actor as unhealthy.
manager.set_actor_state(1, False)
manager.set_actor_state(2, False)
| Test that healthy-only mode works when a list of functions is provided. | 12 | 24 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_healthy_only_works_for_list_of_functions(self):
actors = [Actor.remote(i) for i in range(4)]
manager = FaultTolerantActorManager(actors=actors)
# Mark first and second actor as unhealthy.
manager.set_actor_state(1, False)
manager.set_actor_state(2, False)
```
###Assistant : Test that healthy-only mode works when a list of functions is provided.
|
983 | def test_unassignment(self, mock_func):
notification = UnassignedActivityNotification(
Activity(
project=self.project,
group=self.group,
user=self.user,
type=ActivityType.ASSIGNED,
data={"assignee": ""},
)
)
with self.tasks():
notification.send()
attachment, text = get_attachment()
assert text == f"Issue unassigned by {self.name}"
assert attachment["title"] == self.group.title
assert (
attachment["footer"]
== f"{self.project.slug} | <http://testserver/settings/account/notifications/workflow/?referrer=unassigned_activity-slack-user|Notification Settings>"
)
|
Test that a Slack message is sent with the expected payload when an issue is unassigned
| 16 | 42 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_unassignment(self, mock_func):
notification = UnassignedActivityNotification(
Activity(
project=self.project,
group=self.group,
user=self.user,
type=ActivityType.ASSIGNED,
data={"assignee": ""},
)
)
with self.tasks():
notification.send()
attachment, text = get_attachment()
assert text == f"Issue unassigned by {self.name}"
assert attachment["title"] == self.group.title
assert (
attachment["footer"]
== f"{self.project.slug} | <http://testserver/settings/account/notifications/workflow/?referrer=unassigned_activity-slack-user|Notification Settings>"
)
```
###Assistant :
Test that a Slack message is sent with the expected payload when an issue is unassigned
|
984 | def print_help(self):
has_ticker_start = "[unvl]" if not self.ticker else ""
has_ticker_end = "[/unvl]" if not self.ticker else ""
help_text = f
console.print(text=help_text, menu="Stocks - Insider Trading")
| Print help[cmds]
view view available presets
set set one of the available presets[/cmds]
[param]PRESET: [/param]{self.preset}[cmds]
filter filter insiders based on preset [src][Open Insider][/src]
load load a specific stock ticker for analysis[/cmds]
{has_ticker_start}
[param]Ticker: [/param]{self.ticker}
stats insider stats of the company [src][Open Insider][/src]
act insider activity over time [src][Business Insider][/src]
lins last insider trading of the company [src][Finviz][/src]
{has_ticker_end}
[info]Latest Insiders[/info] [src][Open Insider][/src][cmds]
lcb latest cluster buys
lpsb latest penny stock buys
lit latest insider trading (all filings)
lip latest insider purchases
blip big latest insider purchases ($25k+)
blop big latest officer purchases ($25k+)
blcp big latest CEO/CFO purchases ($25k+)
lis latest insider sales
blis big latest insider sales ($100k+)
blos big latest officer sales ($100k+)
blcs big latest CEO/CFO sales ($100k+)
[info]Top Insiders [src][Open Insider][/src][/info]
topt top officer purchases today
toppw top officer purchases past week
toppm top officer purchases past month
tipt top insider purchases today
tippw top insider purchases past week
tippm top insider purchases past month
tist top insider sales today
tispw top insider sales past week
tispm top insider sales past month[/cmds]
| 176 | 26 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def print_help(self):
has_ticker_start = "[unvl]" if not self.ticker else ""
has_ticker_end = "[/unvl]" if not self.ticker else ""
help_text = f
console.print(text=help_text, menu="Stocks - Insider Trading")
```
###Assistant : Print help[cmds]
view view available presets
set set one of the available presets[/cmds]
[param]PRESET: [/param]{self.preset}[cmds]
filter filter insiders based on preset [src][Open Insider][/src]
load load a specific stock ticker for analysis[/cmds]
{has_ticker_start}
[param]Ticker: [/param]{self.ticker}
stats insider stats of the company [src][Open Insider][/src]
act insider activity over time [src][Business Insider][/src]
lins last insider trading of the company [src][Finviz][/src]
{has_ticker_end}
[info]Latest Insiders[/info] [src][Open Insider][/src][cmds]
lcb latest cluster buys
lpsb latest penny stock buys
lit latest insider trading (all filings)
lip latest insider purchases
blip big latest insider purchases ($25k+)
blop big latest officer purchases ($25k+)
blcp big latest CEO/CFO purchases ($25k+)
lis latest insider sales
blis big latest insider sales ($100k+)
blos big latest officer sales ($100k+)
blcs big latest CEO/CFO sales ($100k+)
[info]Top Insiders [src][Open Insider][/src][/info]
topt top officer purchases today
toppw top officer purchases past week
toppm top officer purchases past month
tipt top insider purchases today
tippw top insider purchases past week
tippm top insider purchases past month
tist top insider sales today
tispw top insider sales past week
tispm top insider sales past month[/cmds]
|
985 | def test_project_config_dynamic_sampling_is_none(default_project):
default_project.update_option("sentry:dynamic_sampling", None)
with Feature({"organizations:server-side-sampling": True}):
cfg = get_project_config(default_project)
cfg = cfg.to_dict()
dynamic_sampling = get_path(cfg, "config", "dynamicSampling")
assert dynamic_sampling is None
@pytest.mark.django_db |
Test (re incident INC-237) that dynamic sampling is None,
so it passes once the fix is in place and fails when it is not
| 21 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_project_config_dynamic_sampling_is_none(default_project):
default_project.update_option("sentry:dynamic_sampling", None)
with Feature({"organizations:server-side-sampling": True}):
cfg = get_project_config(default_project)
cfg = cfg.to_dict()
dynamic_sampling = get_path(cfg, "config", "dynamicSampling")
assert dynamic_sampling is None
@pytest.mark.django_db
```
###Assistant :
Test (re incident INC-237) that dynamic sampling is None,
so it passes once the fix is in place and fails when it is not
|
986 | def seek(self, offset, whence=io.SEEK_SET):
self._check_can_seek()
return self._buffer.seek(offset, whence)
| Change the file position.
The new position is specified by offset, relative to the
position indicated by whence. Values for whence are:
0: start of stream (default); offset must not be negative
1: current stream position
2: end of stream; offset must not be positive
Returns the new file position.
Note that seeking is emulated, so depending on the parameters,
this operation may be extremely slow.
| 66 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def seek(self, offset, whence=io.SEEK_SET):
self._check_can_seek()
return self._buffer.seek(offset, whence)
```
###Assistant : Change the file position.
The new position is specified by offset, relative to the
position indicated by whence. Values for whence are:
0: start of stream (default); offset must not be negative
1: current stream position
2: end of stream; offset must not be positive
Returns the new file position.
Note that seeking is emulated, so depending on the parameters,
this operation may be extremely slow.
|
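A runnable illustration of the emulated-seek semantics described in the entry above, using the standard library's `bz2.BZ2File`, which exposes the same interface; how slow a given seek is depends on how much of the stream must be re-decompressed.
```Python
import bz2
import io

# Write a small compressed file, then exercise the three whence modes.
with bz2.open("example.bz2", "wb") as f:
    f.write(b"0123456789" * 1000)

with bz2.open("example.bz2", "rb") as f:
    f.seek(5)                      # SEEK_SET: absolute offset from the start
    print(f.read(5))               # b'56789'
    f.seek(-4, io.SEEK_END)        # relative to the end; may decompress the whole stream
    print(f.read())                # b'6789'
    print(f.seek(0, io.SEEK_CUR))  # returns the new file position: 10000
```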
987 | def savepoint(self):
if not self._savepoint_allowed():
return
thread_ident = _thread.get_ident()
tid = str(thread_ident).replace("-", "")
self.savepoint_state += 1
sid = "s%s_x%d" % (tid, self.savepoint_state)
self.validate_thread_sharing()
self._savepoint(sid)
return sid
|
Create a savepoint inside the current transaction. Return an
identifier for the savepoint that will be used for the subsequent
rollback or commit. Do nothing if savepoints are not supported.
| 30 | 26 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def savepoint(self):
if not self._savepoint_allowed():
return
thread_ident = _thread.get_ident()
tid = str(thread_ident).replace("-", "")
self.savepoint_state += 1
sid = "s%s_x%d" % (tid, self.savepoint_state)
self.validate_thread_sharing()
self._savepoint(sid)
return sid
```
###Assistant :
Create a savepoint inside the current transaction. Return an
identifier for the savepoint that will be used for the subsequent
rollback or commit. Do nothing if savepoints are not supported.
|
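The identifier returned by the method above maps onto SQL `SAVEPOINT` / `ROLLBACK TO SAVEPOINT` statements. The sketch below shows the same semantics directly on an in-memory SQLite database; it illustrates what the Django wrapper delegates to, not the Django API itself.
```Python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; manage transactions by hand
conn.execute("CREATE TABLE t (name TEXT)")
conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES ('kept')")
conn.execute("SAVEPOINT s1_x1")                   # plays the role of the sid generated above
conn.execute("INSERT INTO t VALUES ('discarded')")
conn.execute("ROLLBACK TO SAVEPOINT s1_x1")       # undo only the work since the savepoint
conn.execute("COMMIT")
print(conn.execute("SELECT name FROM t").fetchall())   # [('kept',)]
```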
988 | def get_unclaimed_expese_claims(filters):
cond = "1=1"
if filters.get("employee"):
cond = "ec.employee = %(employee)s"
return frappe.db.sql(
.format(
cond=cond
),
filters,
as_list=1,
)
|
select
ec.employee, ec.employee_name, ec.name, ec.total_sanctioned_amount, ec.total_amount_reimbursed,
sum(gle.credit_in_account_currency - gle.debit_in_account_currency) as outstanding_amt
from
`tabExpense Claim` ec, `tabGL Entry` gle
where
gle.against_voucher_type = "Expense Claim" and gle.against_voucher = ec.name
and gle.party is not null and ec.docstatus = 1 and ec.is_paid = 0 and {cond} group by ec.name
having
outstanding_amt > 0
| 49 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_unclaimed_expese_claims(filters):
cond = "1=1"
if filters.get("employee"):
cond = "ec.employee = %(employee)s"
return frappe.db.sql(
.format(
cond=cond
),
filters,
as_list=1,
)
```
###Assistant :
select
ec.employee, ec.employee_name, ec.name, ec.total_sanctioned_amount, ec.total_amount_reimbursed,
sum(gle.credit_in_account_currency - gle.debit_in_account_currency) as outstanding_amt
from
`tabExpense Claim` ec, `tabGL Entry` gle
where
gle.against_voucher_type = "Expense Claim" and gle.against_voucher = ec.name
and gle.party is not null and ec.docstatus = 1 and ec.is_paid = 0 and {cond} group by ec.name
having
outstanding_amt > 0
|
989 | def site_config_dir(self) -> str:
# XDG default for $XDG_CONFIG_DIRS only first, if multipath is False
path = os.environ.get("XDG_CONFIG_DIRS", "")
if not path.strip():
path = "/etc/xdg"
return self._with_multi_path(path)
|
:return: config directories shared by users (if `multipath <platformdirs.api.PlatformDirsABC.multipath>`
is enabled and ``XDG_CONFIG_DIRS`` is set to a multi path, the response is also a multi path separated by the OS
path separator), e.g. ``/etc/xdg/$appname/$version``
| 34 | 27 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def site_config_dir(self) -> str:
# XDG default for $XDG_CONFIG_DIRS only first, if multipath is False
path = os.environ.get("XDG_CONFIG_DIRS", "")
if not path.strip():
path = "/etc/xdg"
return self._with_multi_path(path)
```
###Assistant :
:return: config directories shared by users (if `multipath <platformdirs.api.PlatformDirsABC.multipath>`
is enabled and ``XDG_CONFIG_DIRS`` is set to a multi path, the response is also a multi path separated by the OS
path separator), e.g. ``/etc/xdg/$appname/$version``
|
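A usage sketch through platformdirs' module-level `site_config_dir` helper, which wraps the property above; the printed paths assume a Linux host, and the exact values depend on `XDG_CONFIG_DIRS`.
```Python
import os
from platformdirs import site_config_dir

os.environ.pop("XDG_CONFIG_DIRS", None)                   # fall back to the /etc/xdg default
print(site_config_dir("myapp", version="1.0"))            # e.g. /etc/xdg/myapp/1.0

os.environ["XDG_CONFIG_DIRS"] = "/opt/etc/xdg:/etc/xdg"   # two entries, OS path separator
print(site_config_dir("myapp", multipath=True))           # e.g. /opt/etc/xdg/myapp:/etc/xdg/myapp
```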
990 | def fit_transform(self, X, y=None):
self._validate_params()
return self._fit_transform(X, compute_sources=True)
| Fit the model and recover the sources from X.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training data, where `n_samples` is the number of samples
and `n_features` is the number of features.
y : Ignored
Not used, present for API consistency by convention.
Returns
-------
X_new : ndarray of shape (n_samples, n_components)
Estimated sources obtained by transforming the data with the
estimated unmixing matrix.
| 66 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def fit_transform(self, X, y=None):
self._validate_params()
return self._fit_transform(X, compute_sources=True)
```
###Assistant : Fit the model and recover the sources from X.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training data, where `n_samples` is the number of samples
and `n_features` is the number of features.
y : Ignored
Not used, present for API consistency by convention.
Returns
-------
X_new : ndarray of shape (n_samples, n_components)
Estimated sources obtained by transforming the data with the
estimated unmixing matrix.
|
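A usage sketch with scikit-learn's `FastICA`, whose `fit_transform` is the method above: recover two sources from their linear mixtures. The `whiten="unit-variance"` option assumes scikit-learn >= 1.1.
```Python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # (n_samples, 2) true sources
mixing = np.array([[1.0, 0.5], [0.5, 2.0]])
X = sources @ mixing.T                                    # observed mixtures

ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
S_est = ica.fit_transform(X)       # estimated sources, shape (n_samples, n_components)
print(S_est.shape)                 # (2000, 2)
print(ica.mixing_.shape)           # estimated mixing matrix, shape (n_features, n_components)
```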
991 | def __fetch_randomly_sampled_transactions(self, project, query, sample_size, query_time_range):
sampling_factor = self.__generate_transactions_sampling_factor(
project=project,
query=query,
sample_size=sample_size,
query_time_range=query_time_range,
)
builder = QueryBuilder(
Dataset.Discover,
params={
"start": query_time_range.start_time,
"end": query_time_range.end_time,
"project_id": [project.id],
"organization_id": project.organization.id,
},
query=f"{query} event.type:transaction",
selected_columns=[
"id",
"trace",
"random_number() as rand_num",
f"modulo(rand_num, {sampling_factor}) as modulo_num",
],
equations=[],
orderby=None,
auto_fields=True,
auto_aggregations=True,
use_aggregate_conditions=True,
functions_acl=["random_number", "modulo"],
limit=sample_size,
offset=0,
equation_config={"auto_add": False},
)
builder.add_conditions([Condition(lhs=Column("modulo_num"), op=Op.EQ, rhs=0)])
snuba_query = builder.get_snql_query().query
snuba_query = snuba_query.set_select(
snuba_query.select
+ [
Function(
"not",
[Function("has", [Column("contexts.key"), TRACE_PARENT_SPAN_CONTEXT])],
alias="is_root",
)
]
)
snuba_query = snuba_query.set_groupby(
snuba_query.groupby + [Column("modulo_num"), Column("contexts.key")]
)
data = raw_snql_query(
SnubaRequest(dataset=Dataset.Discover.value, app_id="default", query=snuba_query),
referrer=Referrer.DYNAMIC_SAMPLING_DISTRIBUTION_FETCH_TRANSACTIONS.value,
)["data"]
return data
|
Fetches a random sample of transactions of size `sample_size` in the last period
defined by `stats_period`. The random sample is fetched by generating a random number
for every row and then applying a modulo operation to it: if that number is divisible
by the sampling factor the row is kept, otherwise it is discarded. This is an alternative to
sampling the query before applying the conditions. The goal here is to fetch the
transaction ids, their sample rates and their trace ids.
| 82 | 92 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def __fetch_randomly_sampled_transactions(self, project, query, sample_size, query_time_range):
sampling_factor = self.__generate_transactions_sampling_factor(
project=project,
query=query,
sample_size=sample_size,
query_time_range=query_time_range,
)
builder = QueryBuilder(
Dataset.Discover,
params={
"start": query_time_range.start_time,
"end": query_time_range.end_time,
"project_id": [project.id],
"organization_id": project.organization.id,
},
query=f"{query} event.type:transaction",
selected_columns=[
"id",
"trace",
"random_number() as rand_num",
f"modulo(rand_num, {sampling_factor}) as modulo_num",
],
equations=[],
orderby=None,
auto_fields=True,
auto_aggregations=True,
use_aggregate_conditions=True,
functions_acl=["random_number", "modulo"],
limit=sample_size,
offset=0,
equation_config={"auto_add": False},
)
builder.add_conditions([Condition(lhs=Column("modulo_num"), op=Op.EQ, rhs=0)])
snuba_query = builder.get_snql_query().query
snuba_query = snuba_query.set_select(
snuba_query.select
+ [
Function(
"not",
[Function("has", [Column("contexts.key"), TRACE_PARENT_SPAN_CONTEXT])],
alias="is_root",
)
]
)
snuba_query = snuba_query.set_groupby(
snuba_query.groupby + [Column("modulo_num"), Column("contexts.key")]
)
data = raw_snql_query(
SnubaRequest(dataset=Dataset.Discover.value, app_id="default", query=snuba_query),
referrer=Referrer.DYNAMIC_SAMPLING_DISTRIBUTION_FETCH_TRANSACTIONS.value,
)["data"]
return data
```
###Assistant :
Fetches a random sample of transactions of size `sample_size` in the last period
defined by `stats_period`. The random sample is fetched by generating a random number
for every row and then applying a modulo operation to it: if that number is divisible
by the sampling factor the row is kept, otherwise it is discarded. This is an alternative to
sampling the query before applying the conditions. The goal here is to fetch the
transaction ids, their sample rates and their trace ids.
|
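A plain-Python sketch of the modulo-sampling idea the docstring above describes (not Snuba/SnQL): every row gets a random number and only rows whose number is divisible by the sampling factor survive. How the factor is derived is an assumption here, since `__generate_transactions_sampling_factor` is not shown.
```Python
import random

rows = [{"id": i} for i in range(100_000)]
sample_size = 100
sampling_factor = max(len(rows) // sample_size, 1)    # assumed derivation of the factor

sampled = [
    row for row in rows
    if random.randrange(2**32) % sampling_factor == 0  # mirrors random_number() + modulo()
][:sample_size]

print(sampling_factor, len(sampled))   # roughly sample_size rows survive on average
```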
992 | def item_query(doctype, txt, searchfield, start, page_len, filters, as_dict=False):
conditions = []
if isinstance(filters, str):
filters = json.loads(filters)
#Get searchfields from meta and use in Item Link field query
meta = frappe.get_meta("Item", cached=True)
searchfields = meta.get_search_fields()
# these are handled separately
ignored_search_fields = ("item_name", "description")
for ignored_field in ignored_search_fields:
if ignored_field in searchfields:
searchfields.remove(ignored_field)
columns = ''
extra_searchfields = [field for field in searchfields
if not field in ["name", "item_group", "description", "item_name"]]
if extra_searchfields:
columns = ", " + ", ".join(extra_searchfields)
searchfields = searchfields + [field for field in[searchfield or "name", "item_code", "item_group", "item_name"]
if not field in searchfields]
searchfields = " or ".join([field + " like %(txt)s" for field in searchfields])
if filters and isinstance(filters, dict):
if filters.get('customer') or filters.get('supplier'):
party = filters.get('customer') or filters.get('supplier')
item_rules_list = frappe.get_all('Party Specific Item',
filters = {'party': party}, fields = ['restrict_based_on', 'based_on_value'])
filters_dict = {}
for rule in item_rules_list:
if rule['restrict_based_on'] == 'Item':
rule['restrict_based_on'] = 'name'
filters_dict[rule.restrict_based_on] = []
for rule in item_rules_list:
filters_dict[rule.restrict_based_on].append(rule.based_on_value)
for filter in filters_dict:
filters[scrub(filter)] = ['in', filters_dict[filter]]
if filters.get('customer'):
del filters['customer']
else:
del filters['supplier']
else:
filters.pop('customer', None)
filters.pop('supplier', None)
description_cond = ''
if frappe.db.count('Item', cache=True) < 50000:
# scan description only if items are less than 50000
description_cond = 'or tabItem.description LIKE %(txt)s'
return frappe.db.sql(.format(
columns=columns,
scond=searchfields,
fcond=get_filters_cond(doctype, filters, conditions).replace('%', '%%'),
mcond=get_match_cond(doctype).replace('%', '%%'),
description_cond = description_cond),
{
"today": nowdate(),
"txt": "%%%s%%" % txt,
"_txt": txt.replace("%", ""),
"start": start,
"page_len": page_len
}, as_dict=as_dict)
@frappe.whitelist()
@frappe.validate_and_sanitize_search_inputs | select
tabItem.name, tabItem.item_name, tabItem.item_group,
if(length(tabItem.description) > 40, \
concat(substr(tabItem.description, 1, 40), "..."), description) as description
{columns}
from tabItem
where tabItem.docstatus < 2
and tabItem.disabled=0
and tabItem.has_variants=0
and (tabItem.end_of_life > %(today)s or ifnull(tabItem.end_of_life, '0000-00-00')='0000-00-00')
and ({scond} or tabItem.item_code IN (select parent from `tabItem Barcode` where barcode LIKE %(txt)s)
{description_cond})
{fcond} {mcond}
order by
if(locate(%(_txt)s, name), locate(%(_txt)s, name), 99999),
if(locate(%(_txt)s, item_name), locate(%(_txt)s, item_name), 99999),
idx desc,
name, item_name
limit %(start)s, %(page_len)s | 69 | 235 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def item_query(doctype, txt, searchfield, start, page_len, filters, as_dict=False):
conditions = []
if isinstance(filters, str):
filters = json.loads(filters)
#Get searchfields from meta and use in Item Link field query
meta = frappe.get_meta("Item", cached=True)
searchfields = meta.get_search_fields()
# these are handled separately
ignored_search_fields = ("item_name", "description")
for ignored_field in ignored_search_fields:
if ignored_field in searchfields:
searchfields.remove(ignored_field)
columns = ''
extra_searchfields = [field for field in searchfields
if not field in ["name", "item_group", "description", "item_name"]]
if extra_searchfields:
columns = ", " + ", ".join(extra_searchfields)
searchfields = searchfields + [field for field in[searchfield or "name", "item_code", "item_group", "item_name"]
if not field in searchfields]
searchfields = " or ".join([field + " like %(txt)s" for field in searchfields])
if filters and isinstance(filters, dict):
if filters.get('customer') or filters.get('supplier'):
party = filters.get('customer') or filters.get('supplier')
item_rules_list = frappe.get_all('Party Specific Item',
filters = {'party': party}, fields = ['restrict_based_on', 'based_on_value'])
filters_dict = {}
for rule in item_rules_list:
if rule['restrict_based_on'] == 'Item':
rule['restrict_based_on'] = 'name'
filters_dict[rule.restrict_based_on] = []
for rule in item_rules_list:
filters_dict[rule.restrict_based_on].append(rule.based_on_value)
for filter in filters_dict:
filters[scrub(filter)] = ['in', filters_dict[filter]]
if filters.get('customer'):
del filters['customer']
else:
del filters['supplier']
else:
filters.pop('customer', None)
filters.pop('supplier', None)
description_cond = ''
if frappe.db.count('Item', cache=True) < 50000:
# scan description only if items are less than 50000
description_cond = 'or tabItem.description LIKE %(txt)s'
return frappe.db.sql(.format(
columns=columns,
scond=searchfields,
fcond=get_filters_cond(doctype, filters, conditions).replace('%', '%%'),
mcond=get_match_cond(doctype).replace('%', '%%'),
description_cond = description_cond),
{
"today": nowdate(),
"txt": "%%%s%%" % txt,
"_txt": txt.replace("%", ""),
"start": start,
"page_len": page_len
}, as_dict=as_dict)
@frappe.whitelist()
@frappe.validate_and_sanitize_search_inputs
```
###Assistant : select
tabItem.name, tabItem.item_name, tabItem.item_group,
if(length(tabItem.description) > 40, \
concat(substr(tabItem.description, 1, 40), "..."), description) as description
{columns}
from tabItem
where tabItem.docstatus < 2
and tabItem.disabled=0
and tabItem.has_variants=0
and (tabItem.end_of_life > %(today)s or ifnull(tabItem.end_of_life, '0000-00-00')='0000-00-00')
and ({scond} or tabItem.item_code IN (select parent from `tabItem Barcode` where barcode LIKE %(txt)s)
{description_cond})
{fcond} {mcond}
order by
if(locate(%(_txt)s, name), locate(%(_txt)s, name), 99999),
if(locate(%(_txt)s, item_name), locate(%(_txt)s, item_name), 99999),
idx desc,
name, item_name
limit %(start)s, %(page_len)s
|
993 | def apply_and_enforce(*args, **kwargs):
func = kwargs.pop("_func")
expected_ndim = kwargs.pop("expected_ndim")
out = func(*args, **kwargs)
if getattr(out, "ndim", 0) != expected_ndim:
out_ndim = getattr(out, "ndim", 0)
raise ValueError(
f"Dimension mismatch: expected output of {func} "
f"to have dims = {expected_ndim}. Got {out_ndim} instead."
)
return out
| Apply a function, and enforce that the output's ndim matches expected_ndim
Ensures the output has the expected dimensionality. | 17 | 44 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def apply_and_enforce(*args, **kwargs):
func = kwargs.pop("_func")
expected_ndim = kwargs.pop("expected_ndim")
out = func(*args, **kwargs)
if getattr(out, "ndim", 0) != expected_ndim:
out_ndim = getattr(out, "ndim", 0)
raise ValueError(
f"Dimension mismatch: expected output of {func} "
f"to have dims = {expected_ndim}. Got {out_ndim} instead."
)
return out
```
###Assistant : Apply a function, and enforce that the output's ndim matches expected_ndim
Ensures the output has the expected dimensionality.
|
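A quick behavior check, assuming the `apply_and_enforce` definition from the entry above is in scope:
```Python
import numpy as np

x = np.arange(6).reshape(2, 3)

# np.cumsum with an axis keeps two dimensions, so the check passes.
out = apply_and_enforce(x, _func=np.cumsum, expected_ndim=2, axis=1)
print(out.shape)   # (2, 3)

# np.sum collapses to a 0-d scalar, so the dimension check raises.
try:
    apply_and_enforce(x, _func=np.sum, expected_ndim=2)
except ValueError as err:
    print(err)     # Dimension mismatch: expected output of <function sum ...> to have dims = 2. Got 0 instead.
```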
994 | def _sanitize_non_ordered(data) -> None:
if isinstance(data, (set, frozenset)):
raise TypeError(f"'{type(data).__name__}' type is unordered")
|
Raise only for unordered sets, e.g., not for dict_keys
| 9 | 13 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _sanitize_non_ordered(data) -> None:
if isinstance(data, (set, frozenset)):
raise TypeError(f"'{type(data).__name__}' type is unordered")
```
###Assistant :
Raise only for unordered sets, e.g., not for dict_keys
|
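A behavior check for the helper above; the two-line function is restated so the demo runs on its own.
```Python
def _sanitize_non_ordered(data) -> None:
    if isinstance(data, (set, frozenset)):
        raise TypeError(f"'{type(data).__name__}' type is unordered")

_sanitize_non_ordered([1, 2, 3])           # lists pass silently
_sanitize_non_ordered({"a": 1}.keys())     # dict_keys have a defined order, so they pass too

try:
    _sanitize_non_ordered({1, 2, 3})       # sets have no defined order
except TypeError as err:
    print(err)                             # 'set' type is unordered
```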
995 | def dis(x=None, *, file=None, depth=None):
if x is None:
distb(file=file)
return
# Extract functions from methods.
if hasattr(x, '__func__'):
x = x.__func__
# Extract compiled code objects from...
if hasattr(x, '__code__'): # ...a function, or
x = x.__code__
elif hasattr(x, 'gi_code'): #...a generator object, or
x = x.gi_code
elif hasattr(x, 'ag_code'): #...an asynchronous generator object, or
x = x.ag_code
elif hasattr(x, 'cr_code'): #...a coroutine.
x = x.cr_code
# Perform the disassembly.
if hasattr(x, '__dict__'): # Class or module
items = sorted(x.__dict__.items())
for name, x1 in items:
if isinstance(x1, _have_code):
print("Disassembly of %s:" % name, file=file)
try:
dis(x1, file=file, depth=depth)
except TypeError as msg:
print("Sorry:", msg, file=file)
print(file=file)
elif hasattr(x, 'co_code'): # Code object
_disassemble_recursive(x, file=file, depth=depth)
elif isinstance(x, (bytes, bytearray)): # Raw bytecode
_disassemble_bytes(x, file=file)
elif isinstance(x, str): # Source code
_disassemble_str(x, file=file, depth=depth)
else:
raise TypeError("don't know how to disassemble %s objects" %
type(x).__name__)
| Disassemble classes, methods, functions, and other compiled objects.
With no argument, disassemble the last traceback.
Compiled objects currently include generator objects, async generator
objects, and coroutine objects, all of which store their code object
in a special attribute.
| 38 | 145 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def dis(x=None, *, file=None, depth=None):
if x is None:
distb(file=file)
return
# Extract functions from methods.
if hasattr(x, '__func__'):
x = x.__func__
# Extract compiled code objects from...
if hasattr(x, '__code__'): # ...a function, or
x = x.__code__
elif hasattr(x, 'gi_code'): #...a generator object, or
x = x.gi_code
elif hasattr(x, 'ag_code'): #...an asynchronous generator object, or
x = x.ag_code
elif hasattr(x, 'cr_code'): #...a coroutine.
x = x.cr_code
# Perform the disassembly.
if hasattr(x, '__dict__'): # Class or module
items = sorted(x.__dict__.items())
for name, x1 in items:
if isinstance(x1, _have_code):
print("Disassembly of %s:" % name, file=file)
try:
dis(x1, file=file, depth=depth)
except TypeError as msg:
print("Sorry:", msg, file=file)
print(file=file)
elif hasattr(x, 'co_code'): # Code object
_disassemble_recursive(x, file=file, depth=depth)
elif isinstance(x, (bytes, bytearray)): # Raw bytecode
_disassemble_bytes(x, file=file)
elif isinstance(x, str): # Source code
_disassemble_str(x, file=file, depth=depth)
else:
raise TypeError("don't know how to disassemble %s objects" %
type(x).__name__)
```
###Assistant : Disassemble classes, methods, functions, and other compiled objects.
With no argument, disassemble the last traceback.
Compiled objects currently include generator objects, async generator
objects, and coroutine objects, all of which store their code object
in a special attribute.
|
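A usage sketch with the standard library's `dis.dis`, the entry point documented above; it accepts plain functions, generator or coroutine objects, and raw source strings.
```Python
import dis

def add(a, b):
    return a + b

dis.dis(add)                        # disassembles add.__code__

gen = (i * i for i in range(3))
dis.dis(gen)                        # generator object: dis reads gen.gi_code

dis.dis("x = 1 if flag else 2")     # source string: compiled first, then disassembled
```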
996 | def bernoulli_poly(n, x=None, polys=False):
r
return named_poly(n, dup_bernoulli, QQ, "Bernoulli polynomial", (x,), polys)
| Generates the Bernoulli polynomial `\operatorname{B}_n(x)`.
`\operatorname{B}_n(x)` is the unique polynomial satisfying
.. math :: \int_{x}^{x+1} \operatorname{B}_n(t) \,dt = x^n.
Based on this, we have for nonnegative integer `s` and integer
`a` and `b`
.. math :: \sum_{k=a}^{b} k^s = \frac{\operatorname{B}_{s+1}(b+1) -
\operatorname{B}_{s+1}(a)}{s+1}
which is related to Jakob Bernoulli's original motivation for introducing
the Bernoulli numbers, the values of these polynomials at `x = 1`.
Examples
========
>>> from sympy import summation
>>> from sympy.abc import x
>>> from sympy.polys import bernoulli_poly
>>> bernoulli_poly(5, x)
x**5 - 5*x**4/2 + 5*x**3/3 - x/6
>>> def psum(p, a, b):
... return (bernoulli_poly(p+1,b+1) - bernoulli_poly(p+1,a)) / (p+1)
>>> psum(4, -6, 27)
3144337
>>> summation(x**4, (x, -6, 27))
3144337
>>> psum(1, 1, x).factor()
x*(x + 1)/2
>>> psum(2, 1, x).factor()
x*(x + 1)*(2*x + 1)/6
>>> psum(3, 1, x).factor()
x**2*(x + 1)**2/4
Parameters
==========
n : int
Degree of the polynomial.
x : optional
polys : bool, optional
If True, return a Poly, otherwise (default) return an expression.
References
==========
.. [1] https://en.wikipedia.org/wiki/Bernoulli_polynomials
| 168 | 13 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def bernoulli_poly(n, x=None, polys=False):
r
return named_poly(n, dup_bernoulli, QQ, "Bernoulli polynomial", (x,), polys)
```
###Assistant : Generates the Bernoulli polynomial `\operatorname{B}_n(x)`.
`\operatorname{B}_n(x)` is the unique polynomial satisfying
.. math :: \int_{x}^{x+1} \operatorname{B}_n(t) \,dt = x^n.
Based on this, we have for nonnegative integer `s` and integer
`a` and `b`
.. math :: \sum_{k=a}^{b} k^s = \frac{\operatorname{B}_{s+1}(b+1) -
\operatorname{B}_{s+1}(a)}{s+1}
which is related to Jakob Bernoulli's original motivation for introducing
the Bernoulli numbers, the values of these polynomials at `x = 1`.
Examples
========
>>> from sympy import summation
>>> from sympy.abc import x
>>> from sympy.polys import bernoulli_poly
>>> bernoulli_poly(5, x)
x**5 - 5*x**4/2 + 5*x**3/3 - x/6
>>> def psum(p, a, b):
... return (bernoulli_poly(p+1,b+1) - bernoulli_poly(p+1,a)) / (p+1)
>>> psum(4, -6, 27)
3144337
>>> summation(x**4, (x, -6, 27))
3144337
>>> psum(1, 1, x).factor()
x*(x + 1)/2
>>> psum(2, 1, x).factor()
x*(x + 1)*(2*x + 1)/6
>>> psum(3, 1, x).factor()
x**2*(x + 1)**2/4
Parameters
==========
n : int
Degree of the polynomial.
x : optional
polys : bool, optional
If True, return a Poly, otherwise (default) return an expression.
References
==========
.. [1] https://en.wikipedia.org/wiki/Bernoulli_polynomials
|
997 | def after_log(logger, log_level, sec_format="%0.3f"):
log_tpl = (
"Finished call to '%s' after " + str(sec_format) + "(s), "
"this was the %s time calling it."
)
| After-call strategy that logs the finished attempt to the given logger. | 11 | 26 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def after_log(logger, log_level, sec_format="%0.3f"):
log_tpl = (
"Finished call to '%s' after " + str(sec_format) + "(s), "
"this was the %s time calling it."
)
```
###Assistant : After-call strategy that logs the finished attempt to the given logger.
|
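A usage sketch with tenacity, where `after_log` is passed as the `after` hook of `retry`; each finished attempt produces a log line built from the template above.
```Python
import logging
from tenacity import retry, stop_after_attempt, after_log

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(3), after=after_log(logger, logging.INFO))
def flaky():
    raise RuntimeError("still failing")

try:
    flaky()
except Exception:
    # Three attempts were made; each logged a line such as
    # "Finished call to '...flaky' after 0.000(s), this was the 1st time calling it."
    pass
```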
998 | def _compat_get_offset(meth):
sigs = [lambda self, width, height, xdescent, ydescent, renderer: locals(),
lambda self, bbox, renderer: locals()]
|
Decorator for the get_offset method of OffsetBox and subclasses, that
allows supporting both the new signature (self, bbox, renderer) and the old
signature (self, width, height, xdescent, ydescent, renderer).
| 29 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _compat_get_offset(meth):
sigs = [lambda self, width, height, xdescent, ydescent, renderer: locals(),
lambda self, bbox, renderer: locals()]
```
###Assistant :
Decorator for the get_offset method of OffsetBox and subclasses, that
allows supporting both the new signature (self, bbox, renderer) and the old
signature (self, width, height, xdescent, ydescent, renderer).
|
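The decorator body is truncated in the entry above; the sketch below illustrates the dual-signature idea. It is an assumption about the intent, not Matplotlib's actual implementation: bind the caller's arguments against each candidate signature to decide which calling convention was used.
```Python
import inspect

_SIGS = [
    inspect.signature(lambda self, width, height, xdescent, ydescent, renderer: None),  # old form
    inspect.signature(lambda self, bbox, renderer: None),                               # new form
]

def classify_call(*args, **kwargs):
    # Return which get_offset calling convention the arguments match.
    for label, sig in zip(("old", "new"), _SIGS):
        try:
            sig.bind(*args, **kwargs)
            return label
        except TypeError:
            continue
    raise TypeError("arguments match neither get_offset signature")

print(classify_call("self", 10, 20, 0, 0, "renderer"))   # old
print(classify_call("self", "bbox", "renderer"))         # new
```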
999 | def sixtofour(self):
if (self._ip >> 112) != 0x2002:
return None
return IPv4Address((self._ip >> 80) & 0xFFFFFFFF)
| Return the IPv4 6to4 embedded address.
Returns:
The IPv4 6to4-embedded address if present or None if the
address doesn't appear to contain a 6to4 embedded address.
| 26 | 16 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def sixtofour(self):
if (self._ip >> 112) != 0x2002:
return None
return IPv4Address((self._ip >> 80) & 0xFFFFFFFF)
```
###Assistant : Return the IPv4 6to4 embedded address.
Returns:
The IPv4 6to4-embedded address if present or None if the
address doesn't appear to contain a 6to4 embedded address.
|
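A usage sketch via the standard library, where this method is exposed as the `IPv6Address.sixtofour` property:
```Python
import ipaddress

addr = ipaddress.IPv6Address("2002:c000:0204::1")   # 6to4 prefix 2002::/16
print(addr.sixtofour)                               # 192.0.2.4, extracted from bits 80..111

plain = ipaddress.IPv6Address("2001:db8::1")        # not a 6to4 address
print(plain.sixtofour)                              # None
```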