77db28ee2cad829ba2d1b95d11004b7f4abbf635 | dnanexus/dx-toolkit | src/R/dxR/man/workflowSetDetails.Rd | \name{workflowSetDetails}
\alias{workflowSetDetails}
\title{workflowSetDetails API wrapper}
\usage{
workflowSetDetails(objectID,
inputParams = emptyNamedList, jsonifyData = TRUE,
alwaysRetry = TRUE)
}
\arguments{
\item{objectID}{DNAnexus object ID}
  \item{inputParams}{Either an R object that will be
  converted into JSON using \code{RJSONIO::toJSON} to be
  used as the input to the API call, or a JSON string to
  be used as the input directly. If providing the JSON
  string directly, you must set \code{jsonifyData} to
  \code{FALSE}.}
\item{jsonifyData}{Whether to call \code{RJSONIO::toJSON}
on \code{inputParams} to create the JSON string or pass
through the value of \code{inputParams} directly.
(Default is \code{TRUE}.)}
\item{alwaysRetry}{Whether to always retry even when no
response is received from the API server}
}
\value{
If the API call is successful, the parsed JSON of the API
server response is returned (using
\code{RJSONIO::fromJSON}).
}
\description{
This function makes an API call to the
\code{/workflow-xxxx/setDetails} API method; it is a
simple wrapper around the \code{\link{dxHTTPRequest}}
function which makes POST HTTP requests to the API
server.
}
\references{
API spec documentation:
\url{https://wiki.dnanexus.com/API-Specification-v1.0.0/Details-and-Links#API-method\%3A-\%2Fclass-xxxx\%2FsetDetails}
}
\seealso{
\code{\link{dxHTTPRequest}}
}
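A minimal usage sketch for this wrapper (the workflow ID and details payload below are illustrative, and the dxR package is assumed to be installed and authenticated):

```r
library(dxR)  # DNAnexus R client, assumed available

# Hypothetical workflow ID; replace with a real "workflow-xxxx" ID
workflowID <- "workflow-B0000000000000000000000"

# Set details from an R list; it is converted to a JSON string via
# RJSONIO::toJSON because jsonifyData defaults to TRUE
workflowSetDetails(workflowID, inputParams = list(version = "1.0"))

# Equivalent call supplying the JSON string directly; jsonifyData must
# then be FALSE so the string is passed through unmodified
workflowSetDetails(workflowID,
                   inputParams = '{"version": "1.0"}',
                   jsonifyData = FALSE)
```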
| 1,380 | apache-2.0 |
d0736619087bfe22f4da3cb5daaa64ef0fbf3331 | wch/r-source | src/library/stats/man/supsmu.Rd | % File src/library/stats/man/supsmu.Rd
% Part of the R package, https://www.R-project.org
% Copyright 1995-2016 R Core Team
% Distributed under GPL 2 or later
\name{supsmu}
\alias{supsmu}
\title{Friedman's SuperSmoother}
\description{
Smooth the (x, y) values by Friedman's \sQuote{super smoother}.
}
\usage{
supsmu(x, y, wt =, span = "cv", periodic = FALSE, bass = 0, trace = FALSE)
}
\arguments{
\item{x}{x values for smoothing}
\item{y}{y values for smoothing}
\item{wt}{case weights, by default all equal}
\item{span}{the fraction of the observations in the span of the running
lines smoother, or \code{"cv"} to choose this by leave-one-out
cross-validation.}
\item{periodic}{if \code{TRUE}, the x values are assumed to be in
\code{[0, 1]} and of period 1.}
\item{bass}{controls the smoothness of the fitted curve. Values of up
to 10 indicate increasing smoothness.}
\item{trace}{logical, if true, prints one line of info \dQuote{per
spar}, notably useful for \code{"cv"}.}
}
\details{
\code{supsmu} is a running lines smoother which chooses between three
spans for the lines. The running lines smoothers are symmetric, with
\code{k/2} data points each side of the predicted point, and values of
\code{k} as \code{0.5 * n}, \code{0.2 * n} and \code{0.05 * n}, where
\code{n} is the number of data points. If \code{span} is specified,
a single smoother with span \code{span * n} is used.
The best of the three smoothers is chosen by cross-validation for each
prediction. The best spans are then smoothed by a running lines
smoother and the final prediction chosen by linear interpolation.
The FORTRAN code says: \dQuote{For small samples (\code{n < 40}) or if
there are substantial serial correlations between observations close
in x-value, then a pre-specified fixed span smoother (\code{span >
0}) should be used. Reasonable span values are 0.2 to 0.4.}
Cases with non-finite values of \code{x}, \code{y} or \code{wt} are
dropped, with a warning.
}
\value{
A list with components
\item{x}{the input values in increasing order with duplicates removed.}
\item{y}{the corresponding y values on the fitted curve.}
}
\references{
Friedman, J. H. (1984)
SMART User's Guide.
Laboratory for Computational Statistics, Stanford University Technical
Report No.\sspace{}1.
Friedman, J. H. (1984)
A variable span scatterplot smoother.
Laboratory for Computational Statistics, Stanford University Technical
Report No.\sspace{}5.
}
\seealso{\code{\link{ppr}}}
\examples{
require(graphics)
with(cars, {
plot(speed, dist)
lines(supsmu(speed, dist))
lines(supsmu(speed, dist, bass = 7), lty = 2)
})
}
\keyword{smooth}
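Following the advice quoted from the FORTRAN code in the details section, a fixed span can be requested by passing a numeric \code{span} instead of \code{"cv"}; the value 0.3 below is one of the "reasonable" values suggested there:

```r
require(graphics)
# Fixed-span smoother, as recommended for small samples (n < 40) or
# serially correlated data; span = 0.3 uses about a third of the
# observations in each running-lines fit
with(cars, {
  plot(speed, dist)
  lines(supsmu(speed, dist, span = 0.3))
})
```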
| 2,730 | gpl-2.0 |
fdd62f15d484769817b04991b09f76dbab8daab0 | ruthgrace/ALDEx2 | ALDEx2/man/getReads-method.Rd | \name{getReads}
\alias{getReads}
\alias{getReads,aldex.clr-method}
\title{getReads}
\description{
Returns the count table used as input for analysis, for an \code{aldex.clr} object.
}
\usage{
getReads(.object)
}
\arguments{
\item{.object}{An \code{aldex.clr} object containing the Monte Carlo Dirichlet instances derived from estimating the technical variance of the raw read count data, along with sample and feature information.
}
}
\details{
Returns the count table.
}
\value{
A data frame representing the count table used as input for analysis.
}
\seealso{
\code{aldex.clr}
}
\examples{
data(selex)
x <- aldex.clr(selex, mc.samples = 2, verbose = FALSE)
reads <- getReads(x)
}
| 702 | agpl-3.0 |
f644bc40812587483ef2307a7b9bdb641347e3a9 | dmlc/xgboost | R-package/man/xgb.load.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.load.R
\name{xgb.load}
\alias{xgb.load}
\title{Load xgboost model from binary file}
\usage{
xgb.load(modelfile)
}
\arguments{
\item{modelfile}{the name of the binary input file.}
}
\value{
An object of \code{xgb.Booster} class.
}
\description{
Load xgboost model from the binary model file.
}
\details{
The input file is expected to contain a model saved in an xgboost model format
using either \code{\link{xgb.save}} or \code{\link{cb.save.model}} in R, or using some
appropriate methods from other xgboost interfaces. E.g., a model trained in Python and
saved from there in xgboost format, could be loaded from R.
Note: a model saved as an R-object, has to be loaded using corresponding R-methods,
not \code{xgb.load}.
}
\examples{
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train <- agaricus.train
test <- agaricus.test
bst <- xgboost(data = train$data, label = train$label, max_depth = 2,
               eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
xgb.save(bst, 'xgb.model')
bst <- xgb.load('xgb.model')
if (file.exists('xgb.model')) file.remove('xgb.model')
pred <- predict(bst, test$data)
}
\seealso{
\code{\link{xgb.save}}, \code{\link{xgb.Booster.complete}}.
}
| 1,309 | apache-2.0 |
7d1e52160e4c9dbc0f1057ff9fe53ef72411f49b | rwash/surveys | man/known_question.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/question_types.R
\name{known_question}
\alias{known_question}
\title{Known questions}
\usage{
known_question(q_name, levels, options = NULL, ordered = F)
}
\arguments{
\item{q_name}{The column name of the attention check question}
\item{levels}{What the different options should be called}
\item{options}{What the options look like in the original data. NULL == autodetect}
\item{ordered}{Should the resulting factor be ordered?}
}
\description{
Tells the question auto-detection system that a specific question is known with a predefined set of responses
}
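A hedged usage sketch (the column name, level labels, and raw option codes below are illustrative, and the package defining \code{known_question} is assumed to be loaded):

```r
# Declare that column "q_trust" holds responses coded 1..5 that should
# become an ordered factor with descriptive labels (values illustrative)
known_question("q_trust",
               levels = c("Strongly disagree", "Disagree", "Neutral",
                          "Agree", "Strongly agree"),
               options = 1:5,
               ordered = TRUE)
```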
| 640 | mit |
7a9cc07c9996cf72294addee8c9d2172476da3a5 | tianjialiu/HyAirshed-RPackage | man/oceanMask.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/oceanMask.R
\name{oceanMask}
\alias{oceanMask}
\title{Create an Ocean Mask}
\usage{
oceanMask(...)
}
\arguments{
\item{...}{see global arguments: ask_home}
}
\description{
Create an ocean mask from the countries.tif raster derived from the
Natural Earth shapefile ne_10m_admin_0_countries using QGIS.
}
\keyword{mask}
\keyword{ocean}
| 413 | mit |
7dea2e2082b022789c934c9901e26beac591fcb5 | msuefishlab/tdmsreader | man/TdmsFile.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/tdms.R
\docType{class}
\name{TdmsFile}
\alias{TdmsFile}
\title{TdmsFile class}
\format{An \code{\link{R6Class}} generator object}
\usage{
TdmsFile
}
\description{
TdmsFile class
}
\keyword{data}
| 274 | lgpl-3.0 |
63517e5da48ef0ce3ceaccf6e9bf522110315ed2 | jbrzusto/motus-R-package | man/handleUnknownFiles.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/handleUnknownFiles.R
\name{handleUnknownFiles}
\alias{handleUnknownFiles}
\title{handle a set of files of unknown type}
\usage{
handleUnknownFiles(j)
}
\arguments{
\item{j}{the job}
}
\value{
TRUE
}
\description{
Called by \code{\link{processServer}}. Files are
retained, and the sender is emailed a list of the files.
}
\seealso{
\code{\link{processServer}}
}
\author{
John Brzustowski \email{jbrzusto@REMOVE_THIS_PART_fastmail.fm}
}
| 515 | gpl-2.0 |
de3ed50cd5b304ca519b9826910a226d19ee277a | personlin/GMPEhaz | man/Sub.Inter.Com.008.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Subduction_Common_Form.R
\name{Sub.Inter.Com.008}
\alias{Sub.Inter.Com.008}
\title{GMPE function for Subduction interface common form 008 (2017)}
\usage{
Sub.Inter.Com.008(Mag, Rrup, Ztor, Prd)
}
\arguments{
\item{Mag}{Earthquake moment magnitude, Numeric.}
\item{Rrup}{Rupture distance(km), Numeric.}
\item{Ztor}{Depth to the top of the finite rupture model (km).}
\item{Prd}{Period of spectral acceleration.}
}
\value{
A list will be return, including mag, Rrup, Ztor, pecT, lnY, sigma, iflag.
}
\description{
\code{Sub.Inter.Com.008} returns the ground-motion prediction with it sigma of Subduction interface Common form 008 GMPE.
}
\details{
Subduction interface common form 008
}
\examples{
Sub.Inter.Com.008(6, 20, 5, 0)
Sub.Inter.Com.008(7, 20, 2, 0)
}
| 842 | gpl-3.0 |
4bd5ef127a03cdff5e17649d7377b221ee1987f3 | amcdavid/RNASeqPipelineR | man/pipelineReport.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/RNASeqPipelineR.R
\name{pipelineReport}
\alias{pipelineReport}
\title{Produce a report listing the tools and packages used by the pipeline.}
\usage{
pipelineReport()
}
\description{
Produce a report listing the tools and packages used by the pipeline.
}
\details{
Helps with reproducibility by producing a report listing the tools and packages used by the pipeline.
This needs to be run on the system that produced the results. Output to project directory OUTPUT/pipeline_report.md.
}
| 564 | gpl-3.0 |
417a7ac57d18323adb64a0722b44223b36989c87 | garborg/clean.dw | man/AND.Rd | \name{AND}
\alias{AND}
\title{Format conditions to be passed to a where clause.}
\usage{
AND(...)
}
\arguments{
\item{...}{A variable number of arguments, each either an
\code{AND} or \code{OR} object, or a named argument whose
name is a field name and whose value is list([operator],
[values]), where the operator is one of '=', '>', '<',
'like', or 'between', optionally prepended with '!'. A
named argument may also be just a vector of values, in
which case '=' is assumed.}
}
\value{
S3 object of class \code{AND}.
}
\description{
\code{AND} formats conditions to be passed to a where
clause.
}
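A hypothetical construction call, inferred from the argument description above (the exact list grammar and field names are assumptions for illustration, not the package's documented contract):

```r
# Build a condition set for a where clause; '=' is assumed for plain vectors.
cond <- AND(
  region = c("East", "West"),                          # plain vector: '=' assumed
  price  = list(">", 100),                             # explicit operator
  date   = list("between", "2014-01-01", "2014-12-31") # range operator
)
```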
| 522 | mit |
63517e5da48ef0ce3ceaccf6e9bf522110315ed2 | jbrzusto/motusServer | man/handleUnknownFiles.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/handleUnknownFiles.R
\name{handleUnknownFiles}
\alias{handleUnknownFiles}
\title{handle a set of files of unknown type}
\usage{
handleUnknownFiles(j)
}
\arguments{
\item{j}{the job}
}
\value{
TRUE;
}
\description{
Called by \code{\link{processServer}}. Files are
retained, and the sender is emailed a list of the files.
}
\seealso{
\code{\link{processServer}}
}
\author{
John Brzustowski \email{jbrzusto@REMOVE_THIS_PART_fastmail.fm}
}
| 515 | gpl-2.0 |
69cc900f43bfcb8ac30b5565f808b04b7e5bea1e | martynplummer/gemtc | gemtc/man/rank.probability.Rd | \encoding{utf8}
\name{rank.probability}
\alias{rank.probability}
\title{Calculating rank-probabilities}
\description{
Rank probabilities indicate the probability for each treatment to be best, second best, etc.
}
\details{
For each MCMC iteration, the treatments are ranked by their effect relative to an arbitrary baseline.
A frequency table is constructed from these rankings and normalized by the number of iterations to give the rank probabilities.
}
\usage{
rank.probability(result, preferredDirection=1)
}
\arguments{
\item{result}{Object of S3 class \code{mtc.result} to be used in creation of the rank probability table}
\item{preferredDirection}{Preferential direction of the outcome. Set 1 if higher values are preferred, -1 if lower values are preferred.}
}
\value{A matrix with the treatments as rows and the ranks as columns.
The matrix is given class \code{mtc.rank.probability}, for which \code{print} and \code{plot} are overriden. }
\author{Gert van Valkenhoef, Joël Kuiper}
\seealso{
\code{\link{relative.effect}}
}
\examples{
model <- mtc.model(smoking)
# To save computation time we load the samples instead of running the model
\dontrun{results <- mtc.run(model)}
results <- dget(system.file("extdata/luades-smoking.samples.gz", package="gemtc"))
ranks <- rank.probability(results)
print(ranks)
## Rank probability; preferred direction = 1
## [,1] [,2] [,3] [,4]
## A 0.000000 0.003000 0.105125 0.891875
## B 0.057875 0.175875 0.661500 0.104750
## C 0.228250 0.600500 0.170875 0.000375
## D 0.713875 0.220625 0.062500 0.003000
plot(ranks) # plot a cumulative rank plot
plot(ranks, beside=TRUE) # plot a 'rankogram'
}
| 1,665 | gpl-3.0 |
829767de5fc35df2829081943af39ffe6121b6a8 | dpattermann-usgs/repgen | man/dvhydrograph.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/sig-dvhydrograph.R
\docType{methods}
\name{dvhydrograph}
\alias{dvhydrograph}
\alias{dvhydrograph,list-method}
\alias{dvhydrograph}
\title{DV Hydrograph Report}
\usage{
dvhydrograph(data, ...)
\S4method{dvhydrograph}{list}(data, ...)
}
\arguments{
\item{data}{Local data (as list), or URL.}
\item{...}{Additional parameters passed to GET or \code{authenticateUser}.}
}
\description{
DV Hydrograph Report
}
\examples{
library(gsplot)
library(jsonlite)
library(lubridate)
library(dplyr)
Sys.setenv(TZ = "UTC")
reportObject <- fromJSON(system.file('extdata','testsnippets', 'test-dvhydrograph.json',
package = 'repgen'))[['allStats']]
dvhydrograph(reportObject, 'Author Name')
}
| 794 | cc0-1.0 |
21b7a5b98e01b56632d25bb64d91f257e3fdabef | chepec/photoec | man/currentdensity.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/currentdensity.R
\name{currentdensity}
\alias{currentdensity}
\title{Calculate short-circuit current density}
\usage{
currentdensity(
wavelength,
photonflux.csum,
quantum.efficiency = 1,
bandgap = 1.23
)
}
\arguments{
\item{wavelength}{in nanometers}
\item{photonflux.csum}{cumulative photon flux, in s-1 m-2}
\item{quantum.efficiency}{between 0 and 1}
\item{bandgap}{in eV (equivalent to electrode potential)}
}
\value{
dataframe: energy (eV), wavelength (nm), and current density (A m-2)
}
\description{
Calculate the short-circuit current density (Jsc) from wavelength and photon flux.
}
\details{
If you use this function together with STH(), set the quantum.efficiency only once, either here or in \code{\link{STH}}.
}
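As a rough illustration of the underlying arithmetic (back-of-envelope physics in base R, not the package's implementation; the flux numbers are invented):

```r
q  <- 1.602176634e-19              # elementary charge, C
eg <- 1.23                         # band gap / electrode potential, eV
wl <- 300:1400                     # wavelength grid, nm
flux <- rep(1e18, length(wl))      # photon flux per nm bin, s-1 m-2 (made up)
usable <- wl <= 1239.84 / eg       # only photons with E >= Eg contribute
jsc <- q * 1.0 * sum(flux[usable]) # QE = 1; short-circuit current density, A m-2
```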
| 793 | gpl-3.0 |
829767de5fc35df2829081943af39ffe6121b6a8 | thongsav-usgs/repgen | man/dvhydrograph.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/sig-dvhydrograph.R
\docType{methods}
\name{dvhydrograph}
\alias{dvhydrograph}
\alias{dvhydrograph,list-method}
\alias{dvhydrograph}
\title{DV Hydrograph Report}
\usage{
dvhydrograph(data, ...)
\S4method{dvhydrograph}{list}(data, ...)
}
\arguments{
\item{data}{Local data (as list), or URL.}
\item{...}{Additional parameters passed to GET or \code{authenticateUser}.}
}
\description{
DV Hydrograph Report
}
\examples{
library(gsplot)
library(jsonlite)
library(lubridate)
library(dplyr)
Sys.setenv(TZ = "UTC")
reportObject <- fromJSON(system.file('extdata','testsnippets', 'test-dvhydrograph.json',
package = 'repgen'))[['allStats']]
dvhydrograph(reportObject, 'Author Name')
}
| 794 | cc0-1.0 |
829767de5fc35df2829081943af39ffe6121b6a8 | lindsaycarr/repgen | man/dvhydrograph.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/sig-dvhydrograph.R
\docType{methods}
\name{dvhydrograph}
\alias{dvhydrograph}
\alias{dvhydrograph,list-method}
\alias{dvhydrograph}
\title{DV Hydrograph Report}
\usage{
dvhydrograph(data, ...)
\S4method{dvhydrograph}{list}(data, ...)
}
\arguments{
\item{data}{Local data (as list), or URL.}
\item{...}{Additional parameters passed to GET or \code{authenticateUser}.}
}
\description{
DV Hydrograph Report
}
\examples{
library(gsplot)
library(jsonlite)
library(lubridate)
library(dplyr)
Sys.setenv(TZ = "UTC")
reportObject <- fromJSON(system.file('extdata','testsnippets', 'test-dvhydrograph.json',
package = 'repgen'))[['allStats']]
dvhydrograph(reportObject, 'Author Name')
}
| 794 | cc0-1.0 |
1319c9302e7619d2ce81e4229a934e7ed58ac27d | acjackman/dbezr | man/parse_date_time.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/prep.R
\name{parse_date_time}
\alias{parse_date_time}
\title{Convert database DateTime string to an R date}
\usage{
parse_date_time(date_time)
}
\arguments{
\item{date_time}{date_time string passed to \code{parse_date_time}}
}
\description{
Convert database DateTime format to an R date format. Timezone comes from
the \code{dbezr.timezone} option.
}
| 426 | mit |
0d064ad8a5cf1f2fb20ebdf37f704733d2e835a1 | Gcocca/svamap | man/lan.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/svamap.R
\docType{data}
\name{lan}
\alias{lan}
\title{Administrative units in Sweden (Lan)}
\source{
\url{http://www.scb.se/Grupp/Produkter_Tjanster/Verktyg/_Dokument/Shape-svenska.zip}
}
\description{
The dataset contains boundaries of lan in Sweden
}
\details{
Reprojected to WGS84
}
\keyword{datasets}
| 384 | gpl-3.0 |
a944aadc8b4bd31478560fe5838a6414f1a67571 | liquidSVM/liquidSVM | bindings/R/liquidSVM/man/liquidSVM-package.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/liquidSVM.R
\docType{package}
\name{liquidSVM-package}
\alias{liquidSVM-package}
\alias{liquidSVM}
\title{liquidSVM for R}
\description{
Support vector machines (SVMs) and related kernel-based learning algorithms are a well-known
class of machine learning algorithms, for non-parametric classification and regression.
\pkg{liquidSVM} is an implementation of SVMs whose key features are:
\itemize{
\item fully integrated hyper-parameter selection,
\item extreme speed on both small and large data sets,
\item full flexibility for experts, and
\item inclusion of a variety of different learning scenarios:
\itemize{
\item multi-class classification, ROC, and Neyman-Pearson learning, and
\item least-squares, quantile, and expectile regression
}
}
\ifelse{latex}{\out{\clearpage}}{}
Further information is available in the following vignettes:
\tabular{ll}{
\code{demo} \tab liquidSVM Demo (source, pdf)
\cr\code{documentation} \tab liquidSVM Documentation (source, pdf)\cr
}
}
\details{
In \pkg{liquidSVM} an application cycle is divided into a training phase, in which various SVM
models are created and validated, a selection phase, in which the SVM models that best
satisfy a certain criterion are selected, and a test phase, in which the selected models are
applied to test data. These three phases are based upon several components, which can be
freely combined using different components: solvers, hyper-parameter selection, working sets.
All of these can be configured (see \link{Configuration}) a
For instance multi-class classification with \eqn{k} labels has to be delegated to several binary classifications
called \emph{tasks} either using all-vs-all (\eqn{k(k-1)/2} tasks on the corresponding subsets) or
one-vs-all (\eqn{k} tasks on the full data set).
Every task can be split into \emph{cells} in order to handle larger data sets (for example \eqn{>10000} samples).
Now for every task and every cell, several \emph{folds} are created to enable cross-validated hyper-parameter selection.
The following learning scenarios can be used out of the box:
\describe{
\item{\code{\link{mcSVM}}}{binary and multi-class classification}
\item{\code{\link{lsSVM}}}{least squares regression}
\item{\code{\link{nplSVM}}}{Neyman-Pearson learning to classify with a specified rate on one type of error}
\item{\code{\link{rocSVM}}}{Receiver Operating Characteristic (ROC) curve to solve multiple weighted binary classification problems.}
\item{\code{\link{qtSVM}}}{quantile regression}
\item{\code{\link{exSVM}}}{expectile regression}
\item{\code{\link{bsSVM}}}{bootstrapping}
}
To calculate kernel matrices as used by the SVM we also provide for convenience the function
\code{\link{kern}}.
\pkg{liquidSVM} can benefit heavily from native compilation, hence we recommend to (re-)install it
using the information provided in the \href{../doc/documentation.html#Installation}{installation section}
of the documentation vignette.
}
\section{Known issues}{
Interruption (Ctrl-C) of running train/select/test phases is honored, but can leave
the C++ library in an inconsistent state, so that it is better to save your work and restart your
\R session.
\pkg{liquidSVM} is multi-threaded and is difficult to parallelize externally; see the
\href{../doc/documentation.html#Using external parallelization}{documentation}.
}
\examples{
\dontrun{
set.seed(123)
## Multiclass classification
modelIris <- svm(Species ~ ., iris)
y <- predict(modelIris, iris)
## Least Squares
modelTrees <- svm(Height ~ Girth + Volume, trees)
y <- predict(modelTrees, trees)
plot(trees$Height, y)
test(modelTrees, trees)
## Quantile regression
modelTrees <- qtSVM(Height ~ Girth + Volume, trees, scale=TRUE)
y <- predict(modelTrees, trees)
## ROC curve
modelWarpbreaks <- rocSVM(wool ~ ., warpbreaks, scale=TRUE)
y <- test(modelWarpbreaks, warpbreaks)
plotROC(y,warpbreaks$wool)
}
}
\references{
\url{http://www.isa.uni-stuttgart.de}
}
\seealso{
\code{\link{init.liquidSVM}}, \code{\link{trainSVMs}}, \code{\link{predict.liquidSVM}}, \code{\link{clean.liquidSVM}}, and \code{\link{test.liquidSVM}}, \link{Configuration};
}
\author{
Ingo Steinwart \email{[email protected]},
Philipp Thomann \email{[email protected]}
Maintainer: Philipp Thomann \email{[email protected]}
}
\keyword{SVM}
| 4,421 | agpl-3.0 |
51cf85b40500262e97a171d26d9555bf5f5aa05a | MazamaScience/PWFSLSmoke | man/wrcc_ESAMQualityControl.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/wrcc_ESAMQualityControl.R
\name{wrcc_ESAMQualityControl}
\alias{wrcc_ESAMQualityControl}
\title{Apply Quality Control to raw WRCC E-Sampler tibble}
\usage{
wrcc_ESAMQualityControl(
tbl,
valid_Longitude = c(-180, 180),
valid_Latitude = c(-90, 90),
remove_Lon_zero = TRUE,
remove_Lat_zero = TRUE,
valid_Flow = c(1.999, 2.001),
valid_AT = c(-Inf, 150),
valid_RHi = c(-Inf, 55),
valid_Conc = c(-Inf, 5000),
flagAndKeep = FALSE
)
}
\arguments{
\item{tbl}{single site tibble created by \code{wrcc_parseData()}}
\item{valid_Longitude}{range of valid Longitude values}
\item{valid_Latitude}{range of valid Latitude values}
\item{remove_Lon_zero}{flag to remove rows where Longitude == 0}
\item{remove_Lat_zero}{flag to remove rows where Latitude == 0}
\item{valid_Flow}{range of valid Flow values}
\item{valid_AT}{range of valid AT values}
\item{valid_RHi}{range of valid RHi values}
\item{valid_Conc}{range of valid ConcHr values}
\item{flagAndKeep}{flag, rather than remove, bad data during the QC process}
}
\value{
Cleaned up tibble of WRCC monitor data.
}
\description{
Perform various QC measures on WRCC E-Sampler data.
Any numeric values matching the following are converted to \code{NA}:
\itemize{
\item{\code{x < -900}}
\item{\code{x == -9.9899}}
\item{\code{x == 99999}}
}
The following columns of data are tested against valid ranges:
\itemize{
\item{\code{Flow}}
\item{\code{AT}}
\item{\code{RHi}}
\item{\code{ConcHr}}
}
A \code{POSIXct datetime} column (UTC) is also added based on \code{DateTime}.
}
\seealso{
\code{\link{wrcc_qualityControl}}
}
\keyword{WRCC}
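The sentinel-value and valid-range checks listed above can be sketched in base R as follows (an illustration of the documented QC rules only, not the package's actual code):

```r
# Screen a vector against a valid range, turning out-of-range values into NA.
qc_range <- function(x, lo, hi) replace(x, which(x < lo | x > hi), NA)

flow <- c(2.0, 7.5, -999, 99999)
# Sentinel values documented above become NA:
flow[which(flow < -900 | flow == 99999 | flow == -9.9899)] <- NA
# Then each column is screened against its valid range, e.g. valid_Flow:
flow <- qc_range(flow, 1.999, 2.001)
```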
| 1,676 | gpl-3.0 |
b9e82fffee4f9088689082aeae175f1fe7d6f509 | OHI-Science/ohicore | man/CalculateResilienceAll.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/CalculateResilienceAll.R
\name{CalculateResilienceAll}
\alias{CalculateResilienceAll}
\title{Calculate all the resilience score for each (sub)goal.}
\usage{
CalculateResilienceAll(layers, conf)
}
\arguments{
\item{layers}{object \code{\link{Layers}}}
\item{conf}{object \code{\link{Conf}}}
}
\value{
data.frame containing a 'region_id' column and per-subgoal resilience scores
}
\description{
Calculate all the resilience score for each (sub)goal.
}
| 527 | gpl-2.0 |
d5fda0b098bd1296e74b2333f88dbf2486818d52 | mikldk/popr | man/get_pedigree_as_graph.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/RcppExports.R
\name{get_pedigree_as_graph}
\alias{get_pedigree_as_graph}
\title{Get pedigree information as graph (mainly intended for plotting)}
\usage{
get_pedigree_as_graph(ped)
}
\description{
Get pedigree information as graph (mainly intended for plotting)
}
| 342 | mit |
a852a3906fe670b68b0042f4363740f526cd8554 | GabrieleRovigatti/prodest | prodest/man/prodestWRDG.Rd | \name{prodestWRDG}
\alias{prodestWRDG}
\title{
Estimate productivity - Wooldridge method
}
\description{
The \code{prodestWRDG()} function accepts at least 6 objects (id, time, output, free, state and proxy variables), and returns a \code{prod} object of class \code{S4} with three elements: (i) a list of model-related objects, (ii) a list with the data used in the estimation and estimated vectors of first-stage residuals, and (iii) a list with the estimated parameters and their bootstrapped standard errors.
}
\usage{
prodestWRDG(Y, fX, sX, pX, idvar, timevar, R = 20, G = 3, orth = F, cX = NULL, seed = 123456,
tol = 1e-100, theta0 = NULL, cluster = NULL)
}
\arguments{
\item{Y }{
the vector of value added log output.}
%
\item{fX }{
the vector/matrix/dataframe of log free variables.}
%
\item{sX }{
the vector/matrix/dataframe of log state variables.}
%
\item{pX }{
the vector/matrix/dataframe of log proxy variables.}
%
\item{cX }{
the vector/matrix/dataframe of control variables. By default \code{cX= NULL}.}
%
\item{idvar }{
the vector/matrix/dataframe identifying individual panels.}
%
\item{timevar}{
the vector/matrix/dataframe identifying time.}
%
\item{R }{
the number of block bootstrap repetitions to be performed in the standard error estimation. By default \code{R = 20}.}
%
\item{G }{
the degree of the polynomial for productivity in sX and pX. By default, \code{G = 3}}
%
\item{orth }{
a Boolean that determines whether first-stage polynomial should be orthogonal or raw. By default, \code{orth = F}.
It is recommended to set orth to T if degree of polynomial is high.}
%
\item{theta0 }{
a vector with the second stage optimization starting points. By default \code{theta0 = NULL} and the optimization is run starting from the first stage estimated parameters + \eqn{N(\mu=0,\sigma=0.01)} noise.}
%
\item{cluster}{
an object of class \code{"SOCKcluster"} or \code{"cluster"}. By default \code{cluster = NULL}.}
%
\item{seed}{
seed set when the routine starts. By default \code{seed = 123456}.}
%
\item{tol}{
optimizer tolerance. By default \code{tol = 1e-100}.}
%
}
%%%%%%%%% DETAILS %%%%%%%%%%%
\details{
Consider a Cobb-Douglas production technology for firm \eqn{i} at time \eqn{t}
\itemize{
\item \eqn{y_{it} = \alpha + w_{it}\beta + k_{it}\gamma + \omega_{it} + \epsilon_{it}}
}
where \eqn{y_{it}} is the (log) output, \eqn{w_{it}} is a 1xJ vector of (log) free variables, \eqn{k_{it}} is a 1xK vector of state variables, and \eqn{\epsilon_{it}} is a normally distributed idiosyncratic error term.
The unobserved technical efficiency parameter \eqn{\omega_{it}} evolves according to a first-order Markov process:
\itemize{
\item \eqn{\omega_{it} = E(\omega_{it} | \omega_{it-1}) + u_{it} = g(\omega_{it-1}) + u_{it}}
}
and \eqn{u_{it}} is a random shock component assumed to be uncorrelated with the technical efficiency, the state variables in \eqn{k_{it}} and the lagged free variables \eqn{w_{it-1}}.
Wooldridge method allows to jointly estimate OP/LP two stages jointly in a system of two equations. It relies on the following set of assumptions:
\itemize{
\item a) \eqn{\omega_{it} = g(x_{it} , p_{it})}: productivity is an unknown function \eqn{g(.)} of the state and proxy variables;
\item b) \eqn{E(\omega_{it} | \omega_{it-1})=f[\omega_{it-1}]}, productivity is an unknown function \eqn{f[.]} of lagged productivity, \eqn{\omega_{it-1}}.
}
Under the above set of assumptions, it is possible to construct a system GMM using the vectors of residuals from
\itemize{
\item \eqn{r_{1it} = y_{it} - \alpha - w_{it}\beta - x_{it}\gamma - g(x_{it} , p_{it})}
\item \eqn{r_{2it} = y_{it} - \alpha - w_{it}\beta - x_{it}\gamma - f[g(x_{it-1} , p_{it-1})]}
}
where the unknown function \eqn{f(.)} is approximated by an n-th order polynomial and \eqn{g(x_{it} , m_{it}) = \lambda_0 + c(x_{it} , m_{it})\lambda}. In particular, \eqn{g(x_{it} , m_{it})} is a linear combination of functions in \eqn{(x_{it} , m_{it})}
and \eqn{c_{it}} are the addends of this linear combination. The residuals \eqn{r_{it}} are used to set the moment conditions
\itemize{
\item \eqn{E(Z_{it}*r_{it}) =0}
}
with the following set of instruments:
\itemize{
\item \eqn{Z1_{it} = (1, w_{it}, x_{it}, c_{it})}
\item \eqn{Z2_{it} = (w_{it-1}, c_{it}, c_{it})}
}
}
%%%%%%%%% VALUE %%%%%%%%%%%
\value{
The output of the function \code{prodestWRDG} is a member of the \code{S3} class \pkg{prod}. More precisely, is a list (of length 3) containing the following elements:
\code{Model}, a list containing:
\itemize{
\item \code{method:} a string describing the method ('WRDG').
\item \code{elapsed.time:} time elapsed during the estimation.
\item \code{seed:} the seed set at the beginning of the estimation.
\item \code{opt.outcome:} optimization outcome.
}
\code{Data}, a list containing:
\itemize{
\item \code{Y:} the vector of value added log output.
\item \code{free:} the vector/matrix/dataframe of log free variables.
\item \code{state:} the vector/matrix/dataframe of log state variables.
\item \code{proxy:} the vector/matrix/dataframe of log proxy variables.
\item \code{control:} the vector/matrix/dataframe of log control variables.
\item \code{idvar:} the vector/matrix/dataframe identifying individual panels.
\item \code{timevar:} the vector/matrix/dataframe identifying time.
}
\code{Estimates}, a list containing:
\itemize{
\item \code{pars:} the vector of estimated coefficients.
\item \code{std.errors:} the vector of bootstrapped standard errors.
}
Members of class \code{prod} have an \code{omega} method returning a numeric object with the estimated productivity - that is: \eqn{\omega_{it} = y_{it} - (\alpha + w_{it}\beta + k_{it}\gamma)}.
\code{FSres} method returns a numeric object with the residuals of the first stage regression, while \code{summary}, \code{show} and \code{coef} methods are implemented and work as usual.
}
%%%%%%%%% NOTE %%%%%%%%%%%
% \note{
% \code{prodestWRDG()} proceeds with the estimation
% }
%%%%%%%%% AUTHOR %%%%%%%%%%%
\author{
Gabriele Rovigatti
}
%%%%%%%%% REFERENCES %%%%%%%%%%%
\references{
Wooldridge, J M (2009).
"On estimating firm-level production functions using proxy variables to control for unobservables."
Economics Letters, 104, 112-114.
}
%%%%%%%%% EXAMPLES %%%%%%%%%%%
\examples{
data("chilean")
# fit a model with two free (skilled and unskilled), one state (capital)
# and one proxy variable (electricity)
WRDG.fit <- prodestWRDG(chilean$Y, fX = cbind(chilean$fX1, chilean$fX2),
                        chilean$sX, chilean$pX, chilean$idvar, chilean$timevar)
# show results
WRDG.fit
\dontrun{
# estimate a panel dataset - DGP1, various measurement errors - and run the estimation
sim <- panelSim()
WRDG.sim1 <- prodestWRDG(sim$Y, sim$fX, sim$sX, sim$pX1, sim$idvar, sim$timevar)
WRDG.sim2 <- prodestWRDG(sim$Y, sim$fX, sim$sX, sim$pX2, sim$idvar, sim$timevar)
WRDG.sim3 <- prodestWRDG(sim$Y, sim$fX, sim$sX, sim$pX3, sim$idvar, sim$timevar)
WRDG.sim4 <- prodestWRDG(sim$Y, sim$fX, sim$sX, sim$pX4, sim$idvar, sim$timevar)
# show results in .tex tabular format
printProd(list(WRDG.sim1, WRDG.sim2, WRDG.sim3, WRDG.sim4), parnames = c('Free','State'))
}
}
| 7,729 | gpl-3.0 |
99066ee42c18326e545b0b06877c5eda86fb1fbf | alexstorer/GridR | man/grid.cogMyproxy.Rd | \name{grid.cogMyproxy}
\alias{grid.cogMyproxy}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ grid.cogMyproxy}
\description{
Renews the MyProxy certificate, if COG is installed.
}
\usage{
grid.cogMyproxy()
}
\author{ Malte Lohmeyer}
\keyword{ programming }
| 282 | gpl-2.0 |
388c15d13c27cb71de51c2c50c898aa46180293a | RGLab/RNASeqPipelineR | man/buildReference.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/RNASeqPipelineR.R
\name{buildReference}
\alias{buildReference}
\title{Build a reference genome}
\usage{
buildReference(path = NULL, gtf_file = NULL, fasta_file = NULL,
isoformsFile = NULL, name = NULL, doSTAR = TRUE, starPath = NULL,
threads = 1)
}
\arguments{
\item{path}{\code{character} specifying an \emph{absolute path} path to the Reference_Genome directory.}
\item{gtf_file}{\code{character} the name of the gtf file. Empty by default. If specified the function will look for the named file in the 'path' directory}
\item{fasta_file}{\code{character} the name of the fasta file, must be specified and located in the 'path' directory}
\item{isoformsFile}{\code{character} the name if the known isoforms file (optional). If specified it must be located in the 'path' file.}
\item{name}{\code{character} the prefix of the genome output.}
\item{doSTAR}{\code{logical} logical showing whether to build genome with STAR index.}
\item{starPath}{\code{character} full path to the 'star' executable.}
\item{threads}{\code{integer} number of threads to use}
}
\description{
Builds a reference genome at `path/Reference_Genome/`
}
\details{
You must specify the Utils path if it is not already defined, and have your genome in a folder titled
`Reference_Genome`. This function will construct the reference genome using RSEM tools.
The command line is the default shown in the documentation.
`rsem-prepare-reference --gtf gtf_file --transcript-to-gene-map knownIsoforms.txt --bowtie2 fasta_file name`
or if doSTAR=TRUE
`rsem-prepare-reference --gtf gtf_file --transcript-to-gene-map knownIsoforms.txt --star --star-path starPath fasta_file name`
If the gtf_file is not given, then the transcript-to-gene-map option is not used either. A fasta_file and a name must be provided.
}
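A hypothetical call matching the documented command line (all paths, file names, and the genome prefix below are invented for illustration):

```r
# Build an RSEM/STAR reference from a GTF + FASTA pair with a known-isoforms map.
buildReference(path         = "/data/Reference_Genome",
               gtf_file     = "genes.gtf",
               fasta_file   = "genome.fa",
               isoformsFile = "knownIsoforms.txt",
               name         = "hg38",
               doSTAR       = TRUE,
               starPath     = "/usr/local/bin",
               threads      = 4)
```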
| 1,864 | gpl-3.0 |
ea9da3362bcd6939cbdabfca7957c964ba4683ca | kalden/spartan | man/efast_get_overall_medians.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/efast_analysis.R
\name{efast_get_overall_medians}
\alias{efast_get_overall_medians}
\title{Calculates the summary stats for each parameter set (median of any
replicates)}
\usage{
efast_get_overall_medians(FILEPATH, NUMCURVES, PARAMETERS, NUMSAMPLES,
MEASURES, TIMEPOINTS = NULL, TIMEPOINTSCALE = NULL,
current_time = NULL, check_done = FALSE)
}
\arguments{
\item{FILEPATH}{Directory where the simulation runs can be found, in folders
or in CSV file format}
\item{NUMCURVES}{The number of 'resamples' to perform (see eFAST
documentation) - recommend using at least 3}
\item{PARAMETERS}{Array containing the names of the parameters of which
parameter samples will be generated}
\item{NUMSAMPLES}{The number of parameter subsets that were generated in
the eFAST design}
\item{MEASURES}{Array containing the names of the output measures which
are used to analyse the simulation}
\item{TIMEPOINTS}{Implemented so this method can be used when analysing
multiple simulation timepoints. If only analysing one timepoint, this
should be set to NULL. If not, this should be an array of timepoints,
e.g. c(12,36,48,60)}
\item{TIMEPOINTSCALE}{Sets the scale of the timepoints being analysed,
e.g. "Hours"}
\item{current_time}{If multiple timepoints, the current timepoint being processed}
\item{check_done}{If multiple timepoints, whether the input has been checked}
}
\description{
This method produces a summary of the results for a particular resampling
curve. This shows, for each parameter of interest, the median of each
simulation output measure for each of the 65 parameter value sets generated.
Here's an example. We examine resampling curve 1, and firstly examine
parameter 1. For this parameter of interest, a number of different parameter
value sets were generated from the frequency curves (lets say 65), thus we
have 65 different sets of simulation results. The method
\code{efast_generate_medians_for_all_parameter_subsets} produced a summary
showing the median of each output measure for each run. Now, this method
calculates the median of these medians, for each output measure, and stores
these in the summary. Thus, for each parameter of interest, the medians of
each of the 65 sets of results are stored. The next parameter is then
examined, until all have been analysed. This produces a snapshot showing
the median simulation output for all parameter value sets generated for
the first resample curve. These are stored with the file name
Curve[Number]_Results_Summary in the directory specified in FILEPATH.
Again this can be done recursively for a number of timepoints if required.
}
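The median-of-medians aggregation described here can be sketched in a few lines of base R (illustrative only; the measure names and random entries are made up, not the package's internals):

```r
# 65 parameter value sets (rows) x 2 output measures (cols); entries stand in
# for the per-set medians produced by the previous pipeline stage.
set.seed(1)
set_medians <- matrix(runif(65 * 2), ncol = 2,
                      dimnames = list(NULL, c("Velocity", "Displacement")))
# Median of these medians, per output measure -- one row of the curve summary:
curve_summary <- apply(set_medians, 2, median)
```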
| 2,684 | gpl-2.0 |
99066ee42c18326e545b0b06877c5eda86fb1fbf | osg-bosco/GridR | man/grid.cogMyproxy.Rd | \name{grid.cogMyproxy}
\alias{grid.cogMyproxy}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ grid.cogMyproxy}
\description{
Renews the MyProxy certificate, if COG is installed.
}
\usage{
grid.cogMyproxy()
}
\author{ Malte Lohmeyer}
\keyword{ programming }
| 282 | gpl-2.0 |
99066ee42c18326e545b0b06877c5eda86fb1fbf | cran/GridR | man/grid.cogMyproxy.Rd | \name{grid.cogMyproxy}
\alias{grid.cogMyproxy}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ grid.cogMyproxy}
\description{
Renews the MyProxy certificate, if COG is installed.
}
\usage{
grid.cogMyproxy()
}
\author{ Malte Lohmeyer}
\keyword{ programming }
| 282 | gpl-2.0 |
e82a12686722c4f3b49f98d7b0d59201f8e1d5f9 | UMMS-Biocore/debrowser | man/cutOffSelectionUI.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/deprogs.R
\name{cutOffSelectionUI}
\alias{cutOffSelectionUI}
\title{cutOffSelectionUI}
\usage{
cutOffSelectionUI(id)
}
\arguments{
\item{id}{namespace id}
}
\value{
Returns the left menu according to the selected tab.
}
\description{
Gathers the cut off selection for DE analysis
}
\note{
\code{cutOffSelectionUI}
}
\examples{
x <- cutOffSelectionUI("cutoff")
}
| 446 | gpl-3.0 |
5ccae83406aceb885db03cec54872b2e01852205 | VNyaga/CopulaDTA | man/fit.cdtamodel.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/fit.R
\name{fit.cdtamodel}
\alias{fit.cdtamodel}
\title{Fit copula based bivariate beta-binomial distribution to diagnostic data.}
\usage{
fit.cdtamodel(cdtamodel, data, SID, cores = 3, chains = 3, iter = 6000,
warmup = 1000, thin = 10, ...)
}
\arguments{
\item{cdtamodel}{An object of cdtamodel class from \link{cdtamodel}.}
\item{data}{A data frame with no missing values containing TP, TN, FP, FN, 'SID' and covariates (if necessary).}
\item{SID}{A string indicating the name of the column with the study identifier.}
\item{cores}{A positive numeric value specifying the number of cores to use to execute parallel sampling. When the hardware has at least 4 cores,
the default is 3 cores, and otherwise 1 core.}
\item{chains}{A positive numeric value specifying the number of chains, default is 3.}
\item{iter}{A positive numeric value specifying the number of iterations per chain. The default is 6000.}
\item{warmup}{A positive numeric value (<iter) specifying the number of iterations to be discarded(burn-in/warm-up). The default is 1000.}
\item{thin}{A positive numeric value specifying the interval in which the samples are stored. The default is 10.}
\item{...}{Other optional parameters as specified in \link[rstan]{stan}.}
}
\value{
An object of cdtafit class.
}
\description{
Fit copula based bivariate beta-binomial distribution to diagnostic data.
}
\examples{
\dontrun{
fit1 <- fit(model1,
SID='ID',
data=telomerase,
iter=2000,
warmup=1000,
thin=1,
seed=3)
fit2 <- fit(model2,
SID='StudyID',
data=ascus,
iter=2000,
warmup=1000,
thin=1,
seed=3)
}
}
\references{
{Agresti A (2002). Categorical Data Analysis. John Wiley & Sons, Inc.}
{Clayton DG (1978). A model for Association in Bivariate Life Tables and its Application in
Epidemiological Studies of Familial Tendency in Chronic Disease Incidence. Biometrika, 65(1), 141-151.}
{Frank MJ (1979). On The Simultaneous Associativity of F(x, y) and x + y - F(x, y). Aequationes Mathematicae, pp. 194-226.}
{Farlie DGJ (1960). The Performance of Some Correlation Coefficients for a General Bivariate
Distribution. Biometrika, 47, 307-323.}
{Gumbel EJ (1960). Bivariate Exponential Distributions. Journal of the American Statistical Association, 55, 698-707.}
{Meyer C (2013). The Bivariate Normal Copula. Communications in Statistics - Theory and Methods, 42(13), 2402-2422.}
{Morgenstern D (1956). Einfache Beispiele Zweidimensionaler Verteilungen. Mitteilungsblatt fur Mathematische Statistik, 8, 234-235.}
{Sklar A (1959). Fonctions de Repartition a n Dimensions et Leurs Marges. Publications de l'Institut de Statistique de L'Universite de Paris, 8, 229-231.}
}
\author{
Victoria N Nyaga <[email protected]>
}
| 2,977 | gpl-3.0 |
32b689d0d10c5be848cd764fad58c10074900e13 | ahopki14/immunoSeqR | man/exp_clone.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/exp_clone.R
\name{exp_clone}
\alias{exp_clone}
\title{exp_clone}
\usage{
exp_clone(x, y)
}
\arguments{
\item{x, y}{Vectors representing the receptor counts from two samples}
}
\value{
A vector of Fisher test p-values for each clone, the same length as x and y
}
\description{
Calculates p-values for each clone using Fisher's exact test.
}
\author{
Alexander Hopkins
}
| 441 | gpl-3.0 |
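The per-clone Fisher test described by `exp_clone` can be sketched in a few lines of base R. This is a hypothetical re-implementation for illustration only, not the package source: it assumes each clone's 2x2 table contrasts that clone's count against the remaining counts in each sample, which may differ from how the package actually builds the table.

```r
# Hypothetical sketch: one Fisher's exact test per clone, comparing the
# clone's count against all other counts in each of the two samples.
exp_clone_sketch <- function(x, y) {
  stopifnot(length(x) == length(y))
  nx <- sum(x); ny <- sum(y)
  vapply(seq_along(x), function(i) {
    tab <- matrix(c(x[i], nx - x[i],     # clone i vs. the rest, sample 1
                    y[i], ny - y[i]),    # clone i vs. the rest, sample 2
                  nrow = 2)
    fisher.test(tab)$p.value
  }, numeric(1))
}

p <- exp_clone_sketch(c(10, 0, 5), c(1, 8, 5))  # expanded, contracted, stable
```

The first clone (10 vs. 1) yields a small p-value, while the last (5 vs. 5) is near 1.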
85a08192d243b0e944f51d7006547b6f447306d3 | FertigLab/CoGAPS | man/GIST.data_frame.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Package.R
\docType{data}
\name{GIST.data_frame}
\alias{GIST.data_frame}
\title{GIST gene expression data from Ochs et al. (2009)}
\description{
GIST gene expression data from Ochs et al. (2009)
}
| 274 | bsd-3-clause |
c812827c0b9fc7dcad48b3f472f45e311940d3ee | debangs/RSAGA | man/rsaga.fill.sinks.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/RSAGA-modules.R
\name{rsaga.fill.sinks}
\alias{rsaga.fill.sinks}
\title{Fill Sinks}
\usage{
rsaga.fill.sinks(in.dem, out.dem, method = "planchon.darboux.2001",
out.flowdir, out.wshed, minslope, ...)
}
\arguments{
\item{in.dem}{Input: digital elevation model (DEM) as SAGA grid file (default extension: \code{.sgrd}).}
\item{out.dem}{Output: filled, depression-free DEM (SAGA grid file). Existing files will be overwritten!}
\item{method}{The depression filling algorithm to be used (character). One of \code{"planchon.darboux.2001"} (default), \code{"wang.liu.2006"}, or \code{"xxl.wang.liu.2006"}.}
\item{out.flowdir}{(only for \code{"wang.liu.2001"}): Optional output grid file for computed flow directions (see Notes).}
\item{out.wshed}{(only for \code{"wang.liu.2001"}): Optional output grid file for watershed basins.}
\item{minslope}{Minimum slope angle (in degree) preserved between adjacent grid cells (default value of \code{0.01} only for \code{method="planchon.darboux.2001"}, otherwise no default).}
\item{...}{Optional arguments to be passed to \code{\link{rsaga.geoprocessor}}, including the \code{env} RSAGA geoprocessing environment.}
}
\value{
The type of object returned depends on the \code{intern} argument passed to the \code{\link{rsaga.geoprocessor}}. For \code{intern=FALSE} it is a numerical error code (0: success), or otherwise (default) a character vector with the module's console output.
The function writes SAGA grid files containing the depression-free preprocessed DEM, and optionally the flow directions and watershed basins.
}
\description{
Several methods for filling closed depressions in digital elevation models that would affect hydrological modeling.
}
\details{
This function bundles three SAGA modules for filling sinks using three different algorithms (\code{method} argument).
\code{"planchon.darboux.2001"}: The algorithm of Planchon and Darboux (2001) consists of increasing the elevation of pixels in closed depressions until the sink disappears and a minimum slope angle of \code{minslope} (default: \code{0.01} degree) is established.
\code{"wang.liu.2006"}: This module uses an algorithm proposed by Wang and Liu (2006) to identify and fill surface depressions in DEMs. The method was enhanced to allow the creation of hydrologically sound elevation models, i.e. not only to fill the depressions but also to preserve a downward slope along the flow path. If desired, this is accomplished by preserving a minimum slope gradient (and thus elevation difference) between cells. This is the fully featured version of the module creating a depression-free DEM, a flow path grid and a grid with watershed basins. If you encounter problems processing large data sets (e.g. LIDAR data) with this module, try the basic version (\code{xxl.wang.liu.2006}).
\code{"xxl.wang.liu.2006"}: This modified algorithm after Wang and Liu (2006) is designed to work on large data sets.
}
\note{
The flow directions are coded as 0 = north, 1 = northeast, 2 = east, ..., 7 = northwest.
If \code{minslope=0}, depressions will only be filled until a horizontal surface is established, which may not be helpful for hydrological modeling.
}
\author{
Alexander Brenning (R interface), Volker Wichmann (SAGA module)
}
\references{
Planchon, O., and F. Darboux (2001): A fast, simple and versatile algorithm to fill the depressions of digital elevation models. Catena 46: 159-176.
Wang, L. & H. Liu (2006): An efficient method for identifying and filling surface depressions in digital elevation models for hydrologic analysis and modelling. International Journal of Geographical Information Science, Vol. 20, No. 2: 193-213.
}
\seealso{
\code{\link{rsaga.sink.removal}}, \code{\link{rsaga.sink.route}}.
}
\keyword{interface}
\keyword{spatial}
| 3,861 | gpl-2.0 |
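To make the Planchon-Darboux filling idea concrete, here is a deliberately naive base-R sketch. It is not the SAGA implementation (which is far faster and treats \code{minslope} as an angle): here `eps` is simply a fixed elevation increment standing in for the minimum slope, and the water level `w` is lowered from infinity until every interior cell either drains at its own elevation or sits `eps` above its lowest neighbour.

```r
# Naive Planchon-Darboux-style fill: start with water at +Inf everywhere
# except the grid border, then repeatedly lower each interior cell towards
# max(elevation, lowest neighbour + eps) until nothing changes.
fill_sinks_sketch <- function(z, eps = 0.01) {
  nr <- nrow(z); nc <- ncol(z)
  w <- matrix(Inf, nr, nc)               # water level, lowered from +Inf
  w[c(1, nr), ] <- z[c(1, nr), ]         # border cells can always drain
  w[, c(1, nc)] <- z[, c(1, nc)]
  repeat {
    changed <- FALSE
    for (i in 2:(nr - 1)) for (j in 2:(nc - 1)) {
      nb <- c(w[i - 1, j], w[i + 1, j], w[i, j - 1], w[i, j + 1],
              w[i - 1, j - 1], w[i - 1, j + 1],
              w[i + 1, j - 1], w[i + 1, j + 1])
      target <- max(z[i, j], min(nb) + eps)  # drain at own elevation if possible
      if (target < w[i, j]) { w[i, j] <- target; changed <- TRUE }
    }
    if (!changed) break
  }
  w
}

z <- matrix(5, 5, 5); z[3, 3] <- 1  # a one-cell closed depression
zf <- fill_sinks_sketch(z)          # pit raised just above its surroundings
```

Note that, as the documentation above warns for \code{minslope}, a flat interior is tilted by `eps` per cell so that every cell keeps a downward path to the border.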
537f8f98a17b9cb3afbb475847f0b95df2dc7c37 | RMHogervorst/coffeegeeks | man/coffeegeeks.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/coffeegeeks.R
\docType{package}
\name{coffeegeeks}
\alias{coffeegeeks}
\alias{coffeegeeks-package}
\title{Coffeegeeks}
\description{
A package full of coffee goodness, including but not
limited to coffee themes for ggplot.
}
| 303 | mit |
7e5620ca528c6667c6677a5319038308e04cbe5d | armgong/DistributedR | algorithms/HPdgraph/man/HPdgraph-package.Rd | \name{HPdgraph-package}
\alias{HPdgraph-package}
\alias{HPdgraph}
\docType{package}
\title{Distributed algorithms for graph analytics}
\description{
\pkg{HPdgraph} provides distributed algorithms for graph analytics. It is built on the infrastructure created at HP Labs for distributed computing in R.
}
\details{
\tabular{ll}{
Package: \tab HPdgraph\cr
Type: \tab Package\cr
Version: \tab 1.2.0\cr
Date: \tab 2015-01-16\cr
}
Main Functions:
\itemize{
\item {hpdpagerank:} {compute pagerank of a graph in a distributed fashion.}
\item {hpdwhich.max:} {returns the index of the maximum value stored in a darray.}
}
}
\author{
HP Vertica Analytics Team <[email protected]>
}
\references{
\enumerate{
\item{Using R for Iterative and Incremental Processing. Shivaram Venkataraman, Indrajit Roy, Alvin AuYoung, Rob Schreiber. HotCloud 2012, Boston, USA.}
}
}
\keyword{Distributed R}
\keyword{Distributed Graph Analytics}
\keyword{Big Data Analytics}
| 1,032 | gpl-2.0 |
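The package's contribution is distributing this computation, but the underlying iteration is easy to sketch in plain single-node R. The toy power iteration below is only an illustration of what `hpdpagerank` computes; the package's actual interface and its darray-based distributed storage differ from this sketch.

```r
# Toy single-node power iteration for PageRank on a column-stochastic
# link matrix A (column j holds the out-link weights of page j).
pagerank_sketch <- function(A, damping = 0.85, tol = 1e-8, max_iter = 200) {
  n <- ncol(A)
  r <- rep(1 / n, n)                    # start from the uniform distribution
  for (i in seq_len(max_iter)) {
    r_new <- (1 - damping) / n + damping * (A %*% r)
    if (max(abs(r_new - r)) < tol) break
    r <- as.numeric(r_new)
  }
  as.numeric(r_new)
}

# Page 1 links to {2, 3}, page 2 links to {3}, page 3 links to {1}.
A <- matrix(c(0, 0.5, 0.5,
              0, 0,   1,
              1, 0,   0), nrow = 3)
r <- pagerank_sketch(A)   # page 3 collects links from both 1 and 2
```

The ranks sum to 1, and page 3, which receives links from both other pages, ends up with the highest score.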
9585ef3ae2826017c08ed82d4b022766821ca454 | zjdaye/AFNC | man/AFNC.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/AFNC.R
\name{AFNC}
\alias{AFNC}
\title{Performs AFNC}
\usage{
AFNC(p.value, alpha = 0.05, beta = 0.1, cd, c0 = NULL)
}
\arguments{
\item{p.value}{d-vector of p-values.}
\item{alpha}{level of family-wise error rate for false positive control using the Bonferroni.}
\item{beta}{level of signal missing rate for false negative control using the AFNC.}
\item{cd}{\eqn{c_d} for controlling the Type I error rate under the global null; must be pre-computed using \code{estimate.cd}.}
\item{c0}{maximum number of the most significant p-values considered. If \code{NULL}, \code{c0} will be set to \eqn{\min( \max(5000,0.1*d), d )}, such that at least 10\% of the p-values are considered.}
}
\value{
\item{bonferroni}{Two vectors (\code{index}, \code{p.value}). \code{index} is a vector of indices of variables (SNPs) selected by the Bonferroni, and \code{p.value} is the corresponding p-values.}
\item{afnc}{Two vectors (\code{index}, \code{p.value}). \code{index} is a vector of indices of variables (SNPs) selected by the AFNC, and \code{p.value} is the corresponding p-values.}
\item{t.alpha}{Two values (\code{rank}, \code{p.value}). \code{rank} specifies the rank by which all variants ranked at or before \eqn{t_\alpha} are retained for controlling false positives by Bonferroni. \code{p.value} is the corresponding p-value, such that all variants less than or equal to \code{p.value} are retained.}
\item{T.fn}{Two values (\code{rank}, \code{p.value}). \code{rank} specifies the rank by which all variants ranked at or before \eqn{T_{fn}} are retained for adaptive false negative control (AFNC). \code{p.value} is the corresponding p-value, such that all variants less than or equal to \code{p.value} are retained.}
\item{signal.proportion}{The estimated signal proportion \eqn{\hat{\pi}}.}
\item{number.signals}{The estimated number of signals \eqn{\hat{s} = \hat{\pi} * d}.}
}
\description{
AFNC takes a vector of p-values and determines the variables (SNPs) selected by AFNC. The AFNC can retain a high proportion of signals by discarding variants from the Noise region using adaptive false negative control (Jeng et al. 2016). Proportion of signals is estimated from the data.
}
\details{
The algorithm implemented in this function is as follows. (See Jeng et al. (2016) for further details.)
\enumerate{
\item The p-values are ordered by decreasing significance.
\item The signal proportion estimator \eqn{\hat{\pi}} and estimated number of signals \eqn{\hat{s} = \hat{\pi} * d} are obtained using \code{estimate.signal.proportion}.
\item Two cutoff positions, \eqn{t_\alpha} and \eqn{T_{fn}}, are determined to separate the Signal, Indistinguishable, and Noise regions. (See Figure 1 of Jeng et al. (2016) for illustration of the Signal, Indistinguishable, and Noise regions of inference.)
\item Finally, variables (SNPs) with ordered p-values ranked at or before \eqn{t_\alpha} are selected by Bonferroni for family-wise false positive control. Variables (SNPs) with ordered p-values ranked at or before \eqn{T_{fn}} are selected by the AFNC procedure for adaptive false negative control.
}
}
\examples{
# Pre-compute c_d, then run AFNC at alpha = 0.05, beta = 0.1
set.seed(1)
cd = estimate.cd(d=length(p.value), alpha=0.05)
afnc = AFNC(p.value, alpha=0.05, beta=0.1, cd=cd)$afnc
selected = afnc$index # selected variables
selected.p.value = afnc$p.value # p-values of selected variables
}
\references{
Jeng, X.J., Daye, Z.J., Lu, W., and Tzeng, J.Y. (2016) Rare Variants Association Analysis in Large-Scale Sequencing Studies at the Single Locus Level.
}
| 3,573 | gpl-3.0 |
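The selection step of the algorithm above can be illustrated with a deliberately simplified sketch. Everything here is hypothetical scaffolding: it assumes the number of signals `s_hat` has already been estimated, and it stands in for the real `T_fn` calibration (which is more involved) by simply keeping the `s_hat` most significant p-values.

```r
# Simplified two-rule selection: Bonferroni for false positive control,
# and (as a crude stand-in for T_fn) the top s_hat ranks for false
# negative control.
select_sketch <- function(p.value, alpha = 0.05, s_hat) {
  d <- length(p.value)
  ord <- order(p.value)                          # most significant first
  bonferroni <- ord[p.value[ord] <= alpha / d]   # family-wise FP control
  afnc <- ord[seq_len(min(s_hat, d))]            # keep the s_hat top ranks
  list(bonferroni = bonferroni, afnc = afnc)
}

set.seed(1)
p <- c(runif(5, 0, 1e-5), runif(95))   # 5 strong signals among 100 tests
sel <- select_sketch(p, alpha = 0.05, s_hat = 5)
```

With five planted signals well below the Bonferroni threshold of 0.05/100, both rules recover them.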
1996d9f968b77642350053818f2643bed8fb4818 | cbb280/icd9 | man/icd9IsBillable.Rd | % Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/real.R
\name{icd9IsBillable}
\alias{icd9GetBillable}
\alias{icd9GetBillableDecimal}
\alias{icd9GetBillableShort}
\alias{icd9GetNonBillable}
\alias{icd9GetNonBillableDecimal}
\alias{icd9GetNonBillableShort}
\alias{icd9IsBillable}
\alias{icd9IsBillableDecimal}
\alias{icd9IsBillableShort}
\title{Determine whether codes are billable leaf-nodes}
\usage{
icd9IsBillable(icd9, isShort = icd9GuessIsShort(icd9),
version = getLatestBillableVersion())
icd9IsBillableShort(icd9Short, version = getLatestBillableVersion())
icd9IsBillableDecimal(icd9Decimal, version = getLatestBillableVersion())
icd9GetBillable(icd9, isShort = icd9GuessIsShort(icd9), invert = FALSE,
version = getLatestBillableVersion())
icd9GetBillableShort(icd9Short, version = getLatestBillableVersion())
icd9GetBillableDecimal(icd9Decimal, version = getLatestBillableVersion())
icd9GetNonBillableShort(icd9Short, version = getLatestBillableVersion())
icd9GetNonBillableDecimal(icd9Decimal, version = getLatestBillableVersion())
icd9GetNonBillable(icd9, isShort = icd9GuessIsShort(icd9),
version = getLatestBillableVersion())
}
\arguments{
\item{icd9}{is
a character vector or factor of ICD-9 codes. If fewer than five characters
are given in a code, then the digits are greedily assigned to hundreds, then
tens, then units, before the decimal parts. E.g. "10" becomes "010", not
"0010".}
\item{isShort}{single logical value which determines whether the ICD-9 code
provided is in short (TRUE) or decimal (FALSE) form. Where reasonable, this
is guessed from the input data.}
\item{version}{single character string, default is "32" which is the latest
release from CMS. Currently anything from "23" to "32" is accepted. Not
numeric because there are possible cases with non-numeric names, e.g.
revisions within one year, although none currently implemented.}
\item{icd9Short}{is a character vector of ICD-9 codes. If fewer than
five characters are given in a code, then the digits are greedily assigned
to hundreds, then tens, then units, before the decimal parts. E.g. "10"
becomes "010", not "0010"}
\item{icd9Decimal}{character vector of ICD-9 codes. If fewer than five
characters are given in a code, then the digits are greedily assigned to
hundreds, then tens, then units, before the decimal parts. E.g. "10"
becomes "010", not "0010"}
\item{invert}{single logical value, if \code{TRUE}, then the non-billable
codes are returned. For functions with logical result, just negate with
\code{!}. Default is \code{FALSE}.}
}
\value{
logical vector of same length as input
}
\description{
Codes provided are compared to the most recent version of the
CMS list of billable codes, or another version if specified.
}
\section{Functions}{
\itemize{
\item \code{icd9IsBillableShort}: Are the given short-form codes leaf (billable)
codes in the hierarchy?
\item \code{icd9IsBillableDecimal}: Are the given decimal-form codes leaf (billable)
codes in the hierarchy?
\item \code{icd9GetBillable}: Return only those codes which are leaf (billable)
codes in the hierarchy.
\item \code{icd9GetBillableShort}: Return only those short-form codes which are leaf
(billable) codes in the hierarchy.
\item \code{icd9GetBillableDecimal}: Return only those decimal-form codes which are
leaf (billable) codes in the hierarchy.
\item \code{icd9GetNonBillableShort}: Return only those short-form codes which are not
leaf (billable) codes in the hierarchy. This would include invalid and
heading codes.
\item \code{icd9GetNonBillableDecimal}: Return only those decimal-form codes which are not
leaf (billable) codes in the hierarchy. This would include invalid and
heading codes.
\item \code{icd9GetNonBillable}: Return only those codes which are not leaf
(billable) codes in the hierarchy. This would include invalid and heading
codes. Codes are specified (or guessed) to be all decimal- or short-form.
}}
| 3,955 | gpl-3.0 |
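Conceptually, the billable check is membership in a versioned lookup list of leaf codes. The miniature below is an illustration only: it uses a tiny hand-picked subset of short-form codes, whereas the package ships the full CMS lists per version and also handles decimal-form codes and guessing.

```r
# Miniature of the lookup idea: "414" is a heading, "41401" a leaf code.
billable_v32 <- c("0010", "0011", "0019", "41401")  # tiny illustrative subset
icd9_is_billable_sketch <- function(icd9Short, billable = billable_v32) {
  icd9Short %in% billable
}
res <- icd9_is_billable_sketch(c("0010", "414", "41401"))
```

The heading "414" is rejected while its five-digit child "41401" passes, mirroring the leaf-node semantics described above.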
8f5deb96694a2c208c794385462dac6e75eb5793 | averissimo/verissimo-rpackage | man/run.cache.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/cache.R
\name{run.cache}
\alias{run.cache}
\title{Run function and save cache}
\usage{
run.cache(fun, ..., seed = NULL, base.dir = NULL,
cache.prefix = "generic_cache", cache.digest = list(),
show.message = NULL, force.recalc = FALSE, add.to.hash = NULL)
}
\arguments{
\item{fun}{function call name}
\item{...}{parameters for function call}
\item{seed}{when function call is random, this allows to set seed beforehand}
\item{base.dir}{directory where data is stored}
\item{cache.prefix}{prefix for file name to be generated from parameters (...)}
\item{cache.digest}{cache of the digest for one or more of the parameters}
\item{show.message}{show message that data is being retrieved from cache}
\item{force.recalc}{force the recalculation of the values}
\item{add.to.hash}{something to add to the filename generation}
}
\value{
the result of fun(...)
}
\description{
This function calls \code{fun(...)} and caches the result on disk; subsequent calls with the same parameters return the cached result instead of recomputing it.
}
\examples{
run.cache(c, 1, 2, 3, 4)
# next three should use the same cache
run.cache(c, 1, 2, 3, 4)
run.cache(c, 1, 2, 3, 4, cache.digest = list(digest.cache(1)))
run.cache(c, a=1, 2, c=3, 4) # should get result from cache
}
| 1,232 | gpl-3.0 |
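The cache-by-parameter-digest idea can be sketched with base R only. This is a toy stand-in, not the package code: `run.cache` hashes with proper message digests and supports prefixes, seeds and forced recalculation, while the sketch derives a crude key from the deparsed arguments.

```r
# Toy file-backed memoisation: derive a key from the call, recompute only
# on a cache miss. Real code should use a proper hash (e.g. the digest
# package) instead of this crude checksum.
run_cache_sketch <- function(fun, ..., cache.dir = tempdir()) {
  args_txt <- paste(deparse(list(...)), collapse = "")
  key <- paste0(deparse(substitute(fun)), "_", sum(utf8ToInt(args_txt)))
  path <- file.path(cache.dir, paste0("cache_", key, ".rds"))
  if (file.exists(path)) return(readRDS(path))   # cache hit
  result <- fun(...)
  saveRDS(result, path)                          # cache miss: store result
  result
}

a <- run_cache_sketch(c, 1, 2, 3, 4)  # computes and stores
b <- run_cache_sketch(c, 1, 2, 3, 4)  # same key: read back from disk
```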
42fe469de54b3786d82de4f73bdb34fadc47b4cd | dinilu/EPDr | man/list_taxa.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/EPDr-list_functions.R
\name{list_taxa}
\alias{list_taxa}
\title{List taxa in the EPD}
\usage{
list_taxa(connection, group_id = NULL)
}
\arguments{
\item{connection}{PostgreSQLConnection. Object of class
\code{PostgreSQLConnection} as returned by function
\code{\link[EPDr]{connect_to_epd}}.}
\item{group_id}{character. Character vector with group ids (four
character strings; see \code{\link[EPDr]{list_taxagroups}} to check
correspondence between groups and group ids). The user can specify as
many groups as desired in a single call of the function.}
}
\value{
A \code{data.frame} with five
columns: \code{var_}, \code{varname}, \code{varcode}, \code{mhvar},
and \code{groupid}.
\itemize{
\item{"var_"}{ is the taxa identification number in the database.}
\item{"varname"}{ is the taxa name.}
\item{"varcode"}{ is the short taxa name.}
\item{"mhvar"}{ is the identification number of the higher taxonomical
level.}
\item{"groupid"}{ is the identification number of the group that the
taxa belongs to.}
}
}
\description{
This function looks into the database and returns combined
information in the P_VARS and P_GROUP tables (see documentation
of the EPD:
\url{http://www.europeanpollendatabase.net/data/downloads/image/pollen-database-manual-20071011.doc}).
The function allows restricting the list to taxa in a specific
group, using the parameter \code{group_id} along with a valid connection
to the database.
}
\examples{
\dontrun{
epd.connection <- connect_to_epd(host = "localhost", database = "epd",
user = "epdr", password = "epdrpw")
list_taxa(epd.connection, "HERB")
list_taxa(epd.connection, c("HERB", "TRSH"))
}
}
| 1,780 | gpl-3.0 |
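Under the hood the function effectively runs a filtered query against the EPD tables. The snippet below is a hypothetical illustration of just the filtering part; the actual SQL, joins and identifiers used by EPDr may differ (only the column names listed under the return value are taken from this page), and real code should prefer parameterized queries over string interpolation.

```r
# Hypothetical query builder for the group filter; column names follow the
# \value section above, everything else is an assumption.
build_taxa_query <- function(group_id = NULL) {
  sql <- "SELECT var_, varname, varcode, mhvar, groupid FROM p_vars"
  if (!is.null(group_id)) {
    ids <- paste(group_id, collapse = "', '")
    sql <- paste0(sql, " WHERE groupid IN ('", ids, "')")
  }
  sql
}

# A real call would then be something like:
# DBI::dbGetQuery(epd.connection, build_taxa_query(c("HERB", "TRSH")))
q <- build_taxa_query(c("HERB", "TRSH"))
```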
78bfc629cacf05825f83c651e357b1b9451c4dd6 | zlfccnu/econophysics | man/triplePredictSpectrum.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/triplePredictSpectrum.R
\name{triplePredictSpectrum}
\alias{triplePredictSpectrum}
\title{Compute the triplets to evaluate the complexity of the fractal spectrum}
\usage{
triplePredictSpectrum(Alpha, fAlpha, tol = 1e-04)
}
\arguments{
\item{Alpha}{the holder exponent}
\item{fAlpha}{the probability density of the holder exponent}
\item{tol}{the tolerence used to determine the x_min and x_max in the fractal spectrum}
}
\value{
A vector with three elements
}
\description{
Compute the triplets to evaluate the complexity of the fractal spectrum
}
| 628 | gpl-3.0 |
7e5620ca528c6667c6677a5319038308e04cbe5d | jorgemarsal/DistributedR | algorithms/HPdgraph/man/HPdgraph-package.Rd | \name{HPdgraph-package}
\alias{HPdgraph-package}
\alias{HPdgraph}
\docType{package}
\title{Distributed algorithms for graph analytics}
\description{
\pkg{HPdgraph} provides distributed algorithms for graph analytics. It is built on the infrastructure created at HP Labs for distributed computing in R.
}
\details{
\tabular{ll}{
Package: \tab HPdgraph\cr
Type: \tab Package\cr
Version: \tab 1.2.0\cr
Date: \tab 2015-01-16\cr
}
Main Functions:
\itemize{
\item {hpdpagerank:} {compute pagerank of a graph in a distributed fashion.}
\item {hpdwhich.max:} {returns the index of the maximum value stored in a darray.}
}
}
\author{
HP Vertica Analytics Team <[email protected]>
}
\references{
\enumerate{
\item{Using R for Iterative and Incremental Processing. Shivaram Venkataraman, Indrajit Roy, Alvin AuYoung, Rob Schreiber. HotCloud 2012, Boston, USA.}
}
}
\keyword{Distributed R}
\keyword{Distributed Graph Analytics}
\keyword{Big Data Analytics}
| 1,032 | gpl-2.0 |
c6ba865d0edb3335bc0a69687d22e46374419b66 | thomasvangurp/epiGBS | RnBeads/r-packages/RnBeads/man/rnb.call.destructor.Rd | % Generated by roxygen2 (4.0.2): do not edit by hand
\name{rnb.call.destructor}
\alias{rnb.call.destructor}
\title{rnb.call.destructor}
\usage{
rnb.call.destructor(object, ...)
}
\arguments{
\item{object}{object to be destroyed}
\item{...}{further arguments to the method \code{\link[=destroy,RnBSet-method]{destroy}}}
}
\value{
invisible \code{TRUE}
}
\description{
calls the destructor of an RnBSet, RnBeadSet or RnBeadRawSet object
conditionally on whether the \code{enforce.destroy.disk.dumps} option is enabled.
}
\author{
Fabian Mueller
}
| 547 | mit |
1996d9f968b77642350053818f2643bed8fb4818 | wmurphyrd/icd9 | man/icd9IsBillable.Rd | % Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/real.R
\name{icd9IsBillable}
\alias{icd9GetBillable}
\alias{icd9GetBillableDecimal}
\alias{icd9GetBillableShort}
\alias{icd9GetNonBillable}
\alias{icd9GetNonBillableDecimal}
\alias{icd9GetNonBillableShort}
\alias{icd9IsBillable}
\alias{icd9IsBillableDecimal}
\alias{icd9IsBillableShort}
\title{Determine whether codes are billable leaf-nodes}
\usage{
icd9IsBillable(icd9, isShort = icd9GuessIsShort(icd9),
version = getLatestBillableVersion())
icd9IsBillableShort(icd9Short, version = getLatestBillableVersion())
icd9IsBillableDecimal(icd9Decimal, version = getLatestBillableVersion())
icd9GetBillable(icd9, isShort = icd9GuessIsShort(icd9), invert = FALSE,
version = getLatestBillableVersion())
icd9GetBillableShort(icd9Short, version = getLatestBillableVersion())
icd9GetBillableDecimal(icd9Decimal, version = getLatestBillableVersion())
icd9GetNonBillableShort(icd9Short, version = getLatestBillableVersion())
icd9GetNonBillableDecimal(icd9Decimal, version = getLatestBillableVersion())
icd9GetNonBillable(icd9, isShort = icd9GuessIsShort(icd9),
version = getLatestBillableVersion())
}
\arguments{
\item{icd9}{is
a character vector or factor of ICD-9 codes. If fewer than five characters
is given in a code, then the digits are greedily assigned to hundreds, then
tens, then units, before the decimal parts. E.g. "10" becomes "010", not
"0010".}
\item{isShort}{single logical value which determines whether the ICD-9 code
provided is in short (TRUE) or decimal (FALSE) form. Where reasonable, this
is guessed from the input data.}
\item{version}{single character string, default is "32" which is the latest
release from CMS. Currently anything from "23" to "32" is accepted. Not
numeric because there are possible cases with non-numeric names, e.g.
revisions within one year, although none currently implemented.}
\item{icd9Short}{is a character vector of ICD-9 codes. If fewer than
five characters are given in a code, then the digits are greedily assigned
to hundreds, then tens, then units, before the decimal parts. E.g. "10"
becomes "010", not "0010"}
\item{icd9Decimal}{character vector of ICD-9 codes. If fewer than five
characters are given in a code, then the digits are greedily assigned to
hundreds, then tens, then units, before the decimal parts. E.g. "10"
becomes "010", not "0010"}
\item{invert}{single logical value, if \code{TRUE}, then the non-billable
codes are returned. For functions with logical result, just negate with
\code{!}. Default is \code{FALSE}.}
}
\value{
logical vector of same length as input
}
\description{
Codes provided are compared to the most recent version of the
CMS list of billable codes, or another version if specified.
}
\section{Functions}{
\itemize{
\item \code{icd9IsBillableShort}: Are the given short-form codes leaf (billable)
codes in the hierarchy?
\item \code{icd9IsBillableDecimal}: Are the given decimal-form codes leaf (billable)
codes in the hierarchy?
\item \code{icd9GetBillable}: Return only those codes which are leaf (billable)
codes in the hierarchy.
\item \code{icd9GetBillableShort}: Return only those short-form codes which are leaf
(billable) codes in the hierarchy.
\item \code{icd9GetBillableDecimal}: Return only those decimal-form codes which are
leaf (billable) codes in the hierarchy.
\item \code{icd9GetNonBillableShort}: Return only those short-form codes which are not
leaf (billable) codes in the hierarchy. This would include invalid and
heading codes.
\item \code{icd9GetNonBillableDecimal}: Return only those decimal-form codes which are not
leaf (billable) codes in the hierarchy. This would include invalid and
heading codes.
\item \code{icd9GetNonBillable}: Return only those codes which are not leaf
(billable) codes in the hierarchy. This would include invalid and heading
codes. Codes are specified (or guessed) to be all decimal- or short-form.
}}
| 3,955 | gpl-3.0 |
7e5620ca528c6667c6677a5319038308e04cbe5d | vertica/DistributedR | algorithms/HPdgraph/man/HPdgraph-package.Rd | \name{HPdgraph-package}
\alias{HPdgraph-package}
\alias{HPdgraph}
\docType{package}
\title{Distributed algorithms for graph analytics}
\description{
\pkg{HPdgraph} provides distributed algorithms for graph analytics. It is built on the infrastructure created at HP Labs for distributed computing in R.
}
\details{
\tabular{ll}{
Package: \tab HPdgraph\cr
Type: \tab Package\cr
Version: \tab 1.2.0\cr
Date: \tab 2015-01-16\cr
}
Main Functions:
\itemize{
\item {hpdpagerank:} {compute pagerank of a graph in a distributed fashion.}
\item {hpdwhich.max:} {returns the index of the maximum value stored in a darray.}
}
}
\author{
HP Vertica Analytics Team <[email protected]>
}
\references{
\enumerate{
\item{Using R for Iterative and Incremental Processing. Shivaram Venkataraman, Indrajit Roy, Alvin AuYoung, Rob Schreiber. HotCloud 2012, Boston, USA.}
}
}
\keyword{Distributed R}
\keyword{Distributed Graph Analytics}
\keyword{Big Data Analytics}
| 1,032 | gpl-2.0 |
a185dacd61ac3d3b73603a461b371048359237f7 | quantifish/TagGrowth | man/plot_annual_devs.Rd | % Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/plot_annual_devs.R
\name{plot_annual_devs}
\alias{plot_annual_devs}
\title{Plot annual deviates}
\usage{
plot_annual_devs(report, year0 = 1972, file_name = "REs_y")
}
\description{
Plot annual deviates
}
| 291 | mit |
c6ba865d0edb3335bc0a69687d22e46374419b66 | thomasvangurp/epiGBS | RnBeads/RnBeads/man/rnb.call.destructor.Rd | % Generated by roxygen2 (4.0.2): do not edit by hand
\name{rnb.call.destructor}
\alias{rnb.call.destructor}
\title{rnb.call.destructor}
\usage{
rnb.call.destructor(object, ...)
}
\arguments{
\item{object}{object to be destroyed}
\item{...}{further arguments to the method \code{\link[=destroy,RnBSet-method]{destroy}}}
}
\value{
invisible \code{TRUE}
}
\description{
calls the destructor of an RnBSet, RnBeadSet or RnBeadRawSet object
conditionally on whether the \code{enforce.destroy.disk.dumps} option is enabled.
}
\author{
Fabian Mueller
}
| 547 | mit |
da21af69d4354008234604f44cc7885fdc780083 | cran/magma | man/magmaQR-class.Rd | \name{magmaQR-class}
\Rdversion{1.1}
\docType{class}
\alias{magmaQR-class}
\alias{show,magmaQR-method}
\alias{solve,magmaQR,matrix-method}
\alias{solve,magmaQR,missing-method}
\alias{solve,magmaQR,numeric-method}
\title{Class "magmaQR"}
\description{
Storage for a QR decomposition as computed for a magma matrix using the \code{\link{qr}} function.
}
\section{Objects from the Class}{
Objects can be created by calls of the form \code{new("magmaQR", ...)}. More commonly the objects are created explicitly from calls of the form \code{\link{qr}(x)} where \code{x} is an object that inherits from the \code{"magma"} class or as a side-effect of other functions applied to \code{"magma"} objects.
}
\section{Slots}{
\describe{
\item{\code{work}:}{Object of class \code{"numeric"}.}
\item{\code{.S3Class}:}{Object of class \code{"qr"}.}
}
}
\section{Extends}{
Class \code{"\link{qr}"}, directly.
Class \code{"\linkS4class{oldClass}"}, by class "qr", distance 2.
}
\section{Methods}{
\describe{
\item{show}{\code{signature(object = "magmaQR")}: ... }
\item{solve}{\code{signature(a = "magmaQR", b = "matrix")}: ... }
\item{solve}{\code{signature(a = "magmaQR", b = "missing")}: ... }
\item{solve}{\code{signature(a = "magmaQR", b = "numeric")}: ... }
}
}
\author{
Brian J. Smith <[email protected]>
}
\references{
Stanimire Tomov, Rajib Nath, Hatem Ltaief, and Jack Dongarra (2010)
\emph{Dense Linear Algebra Solvers for Multicore with {GPU} Accelerators},
Proceedings of IPDPS 2010: 24th IEEE International Parallel and Distributed Processing Symposium,
Atlanta, GA, April 2010
(\url{http://www.netlib.org/netlib/utk/people/JackDongarra/PAPERS/lawn225.pdf}).
}
\seealso{
\code{\linkS4class{magma}},
\code{\link{qr}}
}
\examples{
mA <- magma(c(1, 0.4, 0.2, 0.4, 1, 0.3, 0.2, 0.3, 1), 3, 3)
y <- c(1, 2, 3)
## magmaQR object
QR <- qr(mA)
## solution to A \%*\% x = y
x <- solve(QR, y)
## check solution
val <- mA \%*\% x
all.equal(as.numeric(val), as.numeric(y))
}
\keyword{classes}
\keyword{algebra}
| 2,050 | gpl-3.0 |
d7cff1f07af4dec0d433be5108c3d067230684ce | nclark-lab/RERconverge | man/plotRers.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/plottingFuncs.R
\name{plotRers}
\alias{plotRers}
\title{Plot the residuals reflecting the relative evolutionary rates (RERs) of a gene across
species present in the gene tree}
\usage{
plotRers(
rermat = NULL,
index = NULL,
phenv = NULL,
rers = NULL,
method = "k",
xlims = NULL,
plot = 1,
xextend = 0.2,
sortrers = F
)
}
\arguments{
\item{rermat}{A residual matrix, output of the \code{getAllResiduals()} function}
\item{index}{A character denoting the name of a gene, or a numeric value corresponding to the gene's row index in the residuals matrix}
\item{phenv}{A phenotype vector returned by \code{\link{tree2Paths}} or \code{\link{foreground2Paths}}}
}
\value{
A plot of the RERs with foreground species labelled in red, and the rest in blue
}
\description{
Plot the residuals reflecting the relative evolutionary rates (RERs) of a gene across
species present in the gene tree
}
| 974 | gpl-3.0 |
da21af69d4354008234604f44cc7885fdc780083 | brian-j-smith/magma | man/magmaQR-class.Rd | \name{magmaQR-class}
\Rdversion{1.1}
\docType{class}
\alias{magmaQR-class}
\alias{show,magmaQR-method}
\alias{solve,magmaQR,matrix-method}
\alias{solve,magmaQR,missing-method}
\alias{solve,magmaQR,numeric-method}
\title{Class "magmaQR"}
\description{
Storage for a QR decomposition as computed for a magma matrix using the \code{\link{qr}} function.
}
\section{Objects from the Class}{
Objects can be created by calls of the form \code{new("magmaQR", ...)}. More commonly the objects are created explicitly from calls of the form \code{\link{qr}(x)} where \code{x} is an object that inherits from the \code{"magma"} class or as a side-effect of other functions applied to \code{"magma"} objects.
}
\section{Slots}{
\describe{
\item{\code{work}:}{Object of class \code{"numeric"}.}
\item{\code{.S3Class}:}{Object of class \code{"qr"}.}
}
}
\section{Extends}{
Class \code{"\link{qr}"}, directly.
Class \code{"\linkS4class{oldClass}"}, by class "qr", distance 2.
}
\section{Methods}{
\describe{
\item{show}{\code{signature(object = "magmaQR")}: ... }
\item{solve}{\code{signature(a = "magmaQR", b = "matrix")}: ... }
\item{solve}{\code{signature(a = "magmaQR", b = "missing")}: ... }
\item{solve}{\code{signature(a = "magmaQR", b = "numeric")}: ... }
}
}
\author{
Brian J. Smith <[email protected]>
}
\references{
Stanimire Tomov, Rajib Nath, Hatem Ltaief, and Jack Dongarra (2010)
\emph{Dense Linear Algebra Solvers for Multicore with {GPU} Accelerators},
Proceedings of IPDPS 2010: 24th IEEE International Parallel and Distributed Processing Symposium,
Atlanta, GA, April 2010
(\url{http://www.netlib.org/netlib/utk/people/JackDongarra/PAPERS/lawn225.pdf}).
}
\seealso{
\code{\linkS4class{magma}},
\code{\link{qr}}
}
\examples{
mA <- magma(c(1, 0.4, 0.2, 0.4, 1, 0.3, 0.2, 0.3, 1), 3, 3)
y <- c(1, 2, 3)
## magmaQR object
QR <- qr(mA)
## solution to A \%*\% x = y
x <- solve(QR, y)
## check solution
val <- mA \%*\% x
all.equal(as.numeric(val), as.numeric(y))
}
\keyword{classes}
\keyword{algebra}
| 2,050 | gpl-3.0 |
3223f8ec7413049117961dad109f57462735ce17 | sfr/RStudio-Addin-Snippets | man/resolve.selection.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/insert.pipe.R
\name{resolve.selection}
\alias{resolve.selection}
\title{Process selection}
\usage{
resolve.selection(context, position, indentation)
}
\arguments{
\item{context}{Actual document context}
\item{position}{document_range to resolve}
\item{indentation}{value of \code{.rs.readUiPref('num_spaces_for_tab')}}
}
\value{
The list of 3 values:
\describe{
\item{range}{\code{document_range} which will be replaced by
the \code{text}}
\item{text}{new text to replace the \code{range}}
\item{column}{new end of the original selection}
}
}
\description{
Method will resolve one selection according to rules specified
in \code{\link{insert.pipe}}
}
\details{
The returned \code{column} is not the same as the end of the
\code{range}, because the range will end at the end of the line. The
\code{column} is important for calculating the expansion or
shrinkage of the original range (\code{position}), in the case there
are multiple selections in the file. The next selection could even start
at that very \code{column}.
}
| 1,205 | gpl-2.0 |
fbc7e51c151082c07797358d6820de0c18349f57 | asgr/LaplacesDemonCpp | man/Mode.Rd | \name{Mode}
\alias{is.amodal}
\alias{is.bimodal}
\alias{is.multimodal}
\alias{is.trimodal}
\alias{is.unimodal}
\alias{Mode}
\alias{Modes}
\title{The Mode(s) of a Vector}
\description{
The mode is a measure of central tendency. It is the value that occurs
most frequently, or in a continuous probability distribution, it is
the value with the most density. A distribution may have no modes
(such as with a constant, or in a uniform distribution when no value
occurs more frequently than any other), or one or more modes.
}
\usage{
is.amodal(x, min.size=0.1)
is.bimodal(x, min.size=0.1)
is.multimodal(x, min.size=0.1)
is.trimodal(x, min.size=0.1)
is.unimodal(x, min.size=0.1)
Mode(x)
Modes(x, min.size=0.1)
}
\arguments{
\item{x}{This is a vector in which a mode (or modes) will be sought.}
\item{min.size}{This is the minimum size that can be considered a
mode, where size means the proportion of the distribution between
areas of increasing kernel density estimates.}
}
\details{
The \code{is.amodal} function is a logical test of whether or not
\code{x} has a mode. If \code{x} has a mode, then \code{TRUE} is
returned, otherwise \code{FALSE}.
The \code{is.bimodal} function is a logical test of whether or not
\code{x} has two modes. If \code{x} has two modes, then \code{TRUE}
is returned, otherwise \code{FALSE}.
The \code{is.multimodal} function is a logical test of whether or not
\code{x} has multiple modes. If \code{x} has multiple modes, then
\code{TRUE} is returned, otherwise \code{FALSE}.
The \code{is.trimodal} function is a logical test of whether or not
\code{x} has three modes. If \code{x} has three modes, then \code{TRUE}
is returned, otherwise \code{FALSE}.
The \code{is.unimodal} function is a logical test of whether or not
\code{x} has one mode. If \code{x} has one mode, then \code{TRUE}
is returned, otherwise \code{FALSE}.
The \code{Mode} function returns the most frequent value when \code{x}
is discrete. If \code{x} is a constant, then it is considered amodal,
and \code{NA} is returned. If multiple modes exist, this function
returns only the mode with the highest density, or if two or more
modes have the same density, then it returns the first mode found.
Otherwise, the \code{Mode} function returns the value of \code{x}
associated with the highest kernel density estimate, or the first
one found if multiple modes have the same density.
The \code{Modes} function is a simple, deterministic function that
differences the kernel density of \code{x} and reports a number of
modes equal to half the number of changes in direction, although the
\code{min.size} argument can be used to reduce the number of modes
returned, and defaults to 0.1, eliminating modes that do not have at
least 10\% of the distributional area. The \code{Modes} function
returns a list with three components: \code{modes}, \code{modes.dens},
and \code{size}. The elements in each component are ordered according
to the decreasing density of the modes. The \code{modes} component is
a vector of the values of \code{x} associated with the modes. The
\code{modes.dens} component is a vector of the kernel density
estimates at the modes. The \code{size} component is a vector of the
proportion of area underneath each mode.
The \code{\link{IterativeQuadrature}},
\code{\link{LaplaceApproximation}}, and \code{\link{VariationalBayes}}
functions characterize the marginal posterior distributions by
posterior modes (means) and variance. A related topic is MAP or
maximum \emph{a posteriori} estimation.
Otherwise, the results of Bayesian inference tend to report the
posterior mean or median, along with probability intervals (see
\code{\link{p.interval}} and \code{\link{LPL.interval}}), rather than
posterior modes. In many types of models, such as mixture models, the
posterior may be multimodal. In such a case, the usual recommendation
is to choose the highest mode if feasible and possible. However, the
highest mode may be uncharacteristic of the majority of the posterior.
}
\author{Statisticat, LLC. \email{[email protected]}}
\seealso{
\code{\link{IterativeQuadrature}},
\code{\link{LaplaceApproximation}},
\code{\link{LaplacesDemon}},
\code{\link{LPL.interval}},
\code{\link{p.interval}}, and
\code{\link{VariationalBayes}}.
}
\examples{
library(LaplacesDemonCpp)
### Below are distributions with different numbers of modes.
x <- c(1,1) #Amodal
x <- c(1,2,2,2,3) #Unimodal
x <- c(1,2) #Bimodal
x <- c(1,3,3,3,3,4,4,4,4,4) #min.size affects the answer
x <- c(1,1,3,3,3,3,4,4,4,4,4) #Trimodal
### And for each of the above, the functions below may be applied.
Mode(x)
Modes(x)
is.amodal(x)
is.bimodal(x)
is.multimodal(x)
is.trimodal(x)
is.unimodal(x)
}
\keyword{Mode}
\keyword{Utility} | 4,966 | mit |
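The deterministic mode-finding idea described in the \code{Modes} documentation above (local maxima of a kernel density estimate) can be sketched in a few lines of R. This is an illustrative sketch, not the package's implementation, and it omits the \code{min.size} filtering step:

```r
# Illustrative sketch (not the package's implementation) of the idea
# behind Modes(): local maxima of a kernel density estimate are taken
# as candidate modes. The min.size filtering step is omitted here.
modes_sketch <- function(x) {
  d <- density(x)
  peaks <- which(diff(sign(diff(d$y))) == -2) + 1  # strict local maxima
  d$x[peaks]
}

set.seed(42)
x <- c(rnorm(200, mean = 0), rnorm(200, mean = 5))
modes_sketch(x)   # values roughly near 0 and 5
```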
fbc7e51c151082c07797358d6820de0c18349f57 | lazycrazyowl/LaplacesDemonCpp | man/Mode.Rd | \name{Mode}
\alias{is.amodal}
\alias{is.bimodal}
\alias{is.multimodal}
\alias{is.trimodal}
\alias{is.unimodal}
\alias{Mode}
\alias{Modes}
\title{The Mode(s) of a Vector}
\description{
The mode is a measure of central tendency. It is the value that occurs
most frequently, or in a continuous probability distribution, it is
the value with the most density. A distribution may have no modes
(such as with a constant, or in a uniform distribution when no value
occurs more frequently than any other), or one or more modes.
}
\usage{
is.amodal(x, min.size=0.1)
is.bimodal(x, min.size=0.1)
is.multimodal(x, min.size=0.1)
is.trimodal(x, min.size=0.1)
is.unimodal(x, min.size=0.1)
Mode(x)
Modes(x, min.size=0.1)
}
\arguments{
\item{x}{This is a vector in which a mode (or modes) will be sought.}
\item{min.size}{This is the minimum size that can be considered a
mode, where size means the proportion of the distribution between
areas of increasing kernel density estimates.}
}
\details{
The \code{is.amodal} function is a logical test of whether or not
\code{x} has a mode. If \code{x} has a mode, then \code{TRUE} is
returned, otherwise \code{FALSE}.
The \code{is.bimodal} function is a logical test of whether or not
\code{x} has two modes. If \code{x} has two modes, then \code{TRUE}
is returned, otherwise \code{FALSE}.
The \code{is.multimodal} function is a logical test of whether or not
\code{x} has multiple modes. If \code{x} has multiple modes, then
\code{TRUE} is returned, otherwise \code{FALSE}.
The \code{is.trimodal} function is a logical test of whether or not
\code{x} has three modes. If \code{x} has three modes, then \code{TRUE}
is returned, otherwise \code{FALSE}.
The \code{is.unimodal} function is a logical test of whether or not
\code{x} has one mode. If \code{x} has one mode, then \code{TRUE}
is returned, otherwise \code{FALSE}.
The \code{Mode} function returns the most frequent value when \code{x}
is discrete. If \code{x} is a constant, then it is considered amodal,
and \code{NA} is returned. If multiple modes exist, this function
returns only the mode with the highest density, or if two or more
modes have the same density, then it returns the first mode found.
Otherwise, the \code{Mode} function returns the value of \code{x}
associated with the highest kernel density estimate, or the first
one found if multiple modes have the same density.
The \code{Modes} function is a simple, deterministic function that
differences the kernel density of \code{x} and reports a number of
modes equal to half the number of changes in direction, although the
\code{min.size} argument can be used to reduce the number of modes
returned, and defaults to 0.1, eliminating modes that do not have at
least 10\% of the distributional area. The \code{Modes} function
returns a list with three components: \code{modes}, \code{modes.dens},
and \code{size}. The elements in each component are ordered according
to the decreasing density of the modes. The \code{modes} component is
a vector of the values of \code{x} associated with the modes. The
\code{modes.dens} component is a vector of the kernel density
estimates at the modes. The \code{size} component is a vector of the
proportion of area underneath each mode.
The \code{\link{IterativeQuadrature}},
\code{\link{LaplaceApproximation}}, and \code{\link{VariationalBayes}}
functions characterize the marginal posterior distributions by
posterior modes (means) and variance. A related topic is MAP or
maximum \emph{a posteriori} estimation.
Otherwise, the results of Bayesian inference tend to report the
posterior mean or median, along with probability intervals (see
\code{\link{p.interval}} and \code{\link{LPL.interval}}), rather than
posterior modes. In many types of models, such as mixture models, the
posterior may be multimodal. In such a case, the usual recommendation
is to choose the highest mode if feasible and possible. However, the
highest mode may be uncharacteristic of the majority of the posterior.
}
\author{Statisticat, LLC. \email{[email protected]}}
\seealso{
\code{\link{IterativeQuadrature}},
\code{\link{LaplaceApproximation}},
\code{\link{LaplacesDemon}},
\code{\link{LPL.interval}},
\code{\link{p.interval}}, and
\code{\link{VariationalBayes}}.
}
\examples{
library(LaplacesDemonCpp)
### Below are distributions with different numbers of modes.
x <- c(1,1) #Amodal
x <- c(1,2,2,2,3) #Unimodal
x <- c(1,2) #Bimodal
x <- c(1,3,3,3,3,4,4,4,4,4) #min.size affects the answer
x <- c(1,1,3,3,3,3,4,4,4,4,4) #Trimodal
### And for each of the above, the functions below may be applied.
Mode(x)
Modes(x)
is.amodal(x)
is.bimodal(x)
is.multimodal(x)
is.trimodal(x)
is.unimodal(x)
}
\keyword{Mode}
\keyword{Utility} | 4,966 | mit |
76a26b61c93a61790a2de5b17187601badb05581 | QuanliWang/SynthHousehold | R/NestedCategBayesImpute/man/UpdateBeta.Rd | \name{UpdateBeta}
\alias{UpdateBeta}
\title{
Update beta.
}
\description{
Update beta -- the concentration parameter in the Dirichlet process for the individual-level latent classes. Currently, this is assumed to be the same within all group-level classes.
}
\usage{
UpdateBeta(ba, bb, v)
}
\arguments{
\item{ba}{
Hyper-parameter a for beta.
}
\item{bb}{
Hyper-parameter b for beta.
}
\item{v}{
Matrix of the beta-distributed variables in the stick breaking representation of the individual-level latent classes by the group-level latent classes.
}
}
\value{
Updated (posterior) value for beta based on the corresponding full conditional distribution.
}
\author{
Quanli Wang
}
\keyword{ sampler }
| 714 | gpl-3.0 |
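The full conditional referred to above has a standard closed form under stick breaking: if each entry of \code{v} is drawn Beta(1, beta) and beta has a Gamma(ba, bb) prior, then beta given \code{v} is Gamma(ba + N, bb - sum(log(1 - v))). The following is a sketch of that textbook update, not necessarily \code{UpdateBeta}'s exact internals:

```r
# Textbook conjugate update for a DP concentration parameter under
# stick breaking: v_l ~ Beta(1, beta) with beta ~ Gamma(ba, bb) implies
# beta | v ~ Gamma(ba + N, bb - sum(log(1 - v))).
# A sketch of the standard result, not UpdateBeta()'s internals.
update_beta_sketch <- function(ba, bb, v) {
  v <- pmin(as.vector(v), 1 - 1e-10)   # guard against log1p(-1) = -Inf
  rgamma(1, shape = ba + length(v), rate = bb - sum(log1p(-v)))
}

set.seed(1)
v <- matrix(rbeta(12, 1, 2), nrow = 3)   # toy stick-breaking draws
update_beta_sketch(0.25, 0.25, v)        # one positive posterior draw
```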
3078718e46a83edf8e1040554a6f0420e3d1677c | thomasvangurp/epiGBS | RnBeads/r-packages/RnBeads/man/performEnrichment.diffMeth.Rd | % Generated by roxygen2 (4.0.2): do not edit by hand
\name{performEnrichment.diffMeth}
\alias{performEnrichment.diffMeth}
\title{performEnrichment.diffMeth}
\usage{
performEnrichment.diffMeth(rnbSet, diffmeth, ontologies = c("BP", "MF"),
rank.cuts.region = c(100, 500, 1000), add.auto.rank.cut = TRUE,
rerank = TRUE, verbose = TRUE, ...)
}
\arguments{
\item{rnbSet}{RnBSet object for which dirrential methylation was computed}
\item{diffmeth}{RnBDiffMeth object. See \code{\link{RnBDiffMeth-class}} for details.}
\item{ontologies}{GO ontologies to use for enrichment analysis}
\item{rank.cuts.region}{Cutoffs for combined ranking that are used to determine differentially methylated regions}
\item{add.auto.rank.cut}{flag indicating whether an automatically computed cut-off should also be considered.}
\item{rerank}{For determining differential methylation: should the ranks be ranked again or should the absolute ranks be used.}
\item{verbose}{Enable for detailed status report}
\item{...}{arguments passed on to the parameters of \code{GOHyperGParams} from the \code{GOstats} package}
}
\value{
a DiffMeth.enrich object (S3) containing the following attributes
\item{region}{Enrichment information for differential methylation on the region level. See \code{GOHyperGresult} from the \code{GOstats} package for further details}
}
\description{
performs Gene Ontology (GO) enrichment analysis for a given differential methylation table.
}
\examples{
\dontrun{
library(RnBeads.hg19)
data(small.example.object)
logger.start(fname=NA)
dm <- rnb.execute.computeDiffMeth(rnb.set.example,pheno.cols=c("Sample_Group","Treatment"))
res <- performEnrichment.diffMeth(rnb.set.example,dm)
}
}
\author{
Fabian Mueller
}
| 1,723 | mit |
2b6ad59f2913137f50bdf2ac7a8ce7a689fc50e9 | Arroita/streamMetabolizer | man/get_version.metab_model.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/metab_model-class.R
\name{get_version.metab_model}
\alias{get_version.metab_model}
\title{Retrieve the version of streamMetabolizer that was used to fit the model}
\usage{
\method{get_version}{metab_model}(metab_model)
}
\arguments{
\item{metab_model}{A metabolism model, implementing the
metab_model_interface, for which to return the data}
}
\description{
Retrieve the version of streamMetabolizer that was used to fit the model
}
\seealso{
Other get_version: \code{\link{get_version}}
}
| 570 | cc0-1.0 |
6d3c1f7420a84d4bd019f705a35e2257aa873d84 | jbrzusto/motus-R-package | man/getRecvDBPath.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/getRecvDBPath.R
\name{getRecvDBPath}
\alias{getRecvDBPath}
\title{Get the path to a receiver database given its serial number.}
\usage{
getRecvDBPath(serno, dbdir = MOTUS_PATH$RECV)
}
\arguments{
\item{serno}{receiver serial number}
\item{dbdir}{path to folder with existing receiver databases
Default: \code{MOTUS_PATH$RECV}}
}
\value{
a character scalar giving the full path to the receiver database,
or NULL if \code{serno} is not a valid receiver serial number
}
\description{
receiver database files are stored in a single directory, and
have names like "SG-XXXXBBBKYYYY.motus" or "Lotek-NNNNN.motus"
}
\author{
John Brzustowski \email{jbrzusto@REMOVE_THIS_PART_fastmail.fm}
}
| 761 | gpl-2.0 |
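The path construction described above amounts to joining the folder with \code{<serno>.motus}. A toy sketch follows; the serial-number patterns in the regular expression are assumptions for illustration, not the package's actual validation rules:

```r
# Toy sketch of the behaviour described above: build "<dbdir>/<serno>.motus",
# returning NULL for an invalid serial number. The regular expression is an
# assumption for illustration; the real validation rules may differ.
get_recv_db_path_sketch <- function(serno, dbdir = ".") {
  if (!grepl("^(SG-[0-9A-Za-z]+|Lotek-[0-9]+)$", serno)) return(NULL)
  file.path(dbdir, paste0(serno, ".motus"))
}

get_recv_db_path_sketch("Lotek-12345", "/sgm/recv")  # "/sgm/recv/Lotek-12345.motus"
get_recv_db_path_sketch("not-a-serno")               # NULL
```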
dff7b203d6d8701077d14a2f20384c981ff202e4 | ronkeizer/PKmisc | man/pk_2cmt_t12_interval.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/pk_2cmt_t12_interval.R
\name{pk_2cmt_t12_interval}
\alias{pk_2cmt_t12_interval}
\title{Calculate average half-life for 2-compartment model during a specific interval}
\usage{
pk_2cmt_t12_interval(CL = 3, V = 30, Q = 2, V2 = 20, tau = 12,
t_inf = NULL)
}
\arguments{
\item{CL}{clearance}
\item{V}{volume of central compartment}
\item{Q}{inter-compartimental clearance}
\item{V2}{volume of peripheral compartment}
\item{tau}{interval (hours)}
\item{t_inf}{infusion time (hours)}
}
\description{
Calculate average half-life for 2-compartment model during a specific interval
}
| 659 | mit |
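For context, the two disposition half-lives of a two-compartment model follow from the micro-rate constants derived from \code{CL}, \code{V}, \code{Q} and \code{V2}; the interval average computed by this function builds on them. Below is a sketch of the standard macro-constant algebra, not the function's interval-averaging calculation itself:

```r
# Standard two-compartment algebra (a sketch for context, not the
# interval-average computation this function performs): half-lives of
# the alpha and beta disposition phases from CL, V, Q and V2.
pk_2cmt_t12_sketch <- function(CL = 3, V = 30, Q = 2, V2 = 20) {
  k10 <- CL / V; k12 <- Q / V; k21 <- Q / V2
  s <- k10 + k12 + k21
  disc  <- sqrt(s^2 - 4 * k10 * k21)
  alpha <- (s + disc) / 2
  beta  <- (s - disc) / 2
  c(t12_alpha = log(2) / alpha, t12_beta = log(2) / beta)
}

round(pk_2cmt_t12_sketch(), 2)   # fast and slow half-lives (hours)
```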
3078718e46a83edf8e1040554a6f0420e3d1677c | thomasvangurp/epiGBS | RnBeads/RnBeads/man/performEnrichment.diffMeth.Rd | % Generated by roxygen2 (4.0.2): do not edit by hand
\name{performEnrichment.diffMeth}
\alias{performEnrichment.diffMeth}
\title{performEnrichment.diffMeth}
\usage{
performEnrichment.diffMeth(rnbSet, diffmeth, ontologies = c("BP", "MF"),
rank.cuts.region = c(100, 500, 1000), add.auto.rank.cut = TRUE,
rerank = TRUE, verbose = TRUE, ...)
}
\arguments{
\item{rnbSet}{RnBSet object for which dirrential methylation was computed}
\item{diffmeth}{RnBDiffMeth object. See \code{\link{RnBDiffMeth-class}} for details.}
\item{ontologies}{GO ontologies to use for enrichment analysis}
\item{rank.cuts.region}{Cutoffs for combined ranking that are used to determine differentially methylated regions}
\item{add.auto.rank.cut}{flag indicating whether an automatically computed cut-off should also be considered.}
\item{rerank}{For determining differential methylation: should the ranks be ranked again or should the absolute ranks be used.}
\item{verbose}{Enable for detailed status report}
\item{...}{arguments passed on to the parameters of \code{GOHyperGParams} from the \code{GOstats} package}
}
\value{
a DiffMeth.enrich object (S3) containing the following attributes
\item{region}{Enrichment information for differential methylation on the region level. See \code{GOHyperGresult} from the \code{GOstats} package for further details}
}
\description{
performs Gene Ontology (GO) enrichment analysis for a given differential methylation table.
}
\examples{
\dontrun{
library(RnBeads.hg19)
data(small.example.object)
logger.start(fname=NA)
dm <- rnb.execute.computeDiffMeth(rnb.set.example,pheno.cols=c("Sample_Group","Treatment"))
res <- performEnrichment.diffMeth(rnb.set.example,dm)
}
}
\author{
Fabian Mueller
}
| 1,723 | mit |
6d3c1f7420a84d4bd019f705a35e2257aa873d84 | jbrzusto/motusServer | man/getRecvDBPath.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/getRecvDBPath.R
\name{getRecvDBPath}
\alias{getRecvDBPath}
\title{Get the path to a receiver database given its serial number.}
\usage{
getRecvDBPath(serno, dbdir = MOTUS_PATH$RECV)
}
\arguments{
\item{serno}{receiver serial number}
\item{dbdir}{path to folder with existing receiver databases
Default: \code{MOTUS_PATH$RECV}}
}
\value{
a character scalar giving the full path to the receiver database,
or NULL if \code{serno} is not a valid receiver serial number
}
\description{
receiver database files are stored in a single directory, and
have names like "SG-XXXXBBBKYYYY.motus" or "Lotek-NNNNN.motus"
}
\author{
John Brzustowski \email{jbrzusto@REMOVE_THIS_PART_fastmail.fm}
}
| 761 | gpl-2.0 |
fb3107ded2c83969fe9ecd27e09f9825ea88c4d7 | ecjbosu/fSEAL | PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd | \name{MultiBetaDrawdown}
\alias{MultiBetaDrawdown}
\title{Drawdown Beta for Multiple path}
\usage{
MultiBetaDrawdown(R, Rm, sample, ps, p = 0.95,
weights = NULL, geometric = TRUE,
type = c("alpha", "average", "max"), ...)
}
\arguments{
\item{R}{an xts, vector, matrix, data frame, timeSeries
or zoo object of asset returns}
\item{Rm}{Return series of the optimal portfolio an xts,
vector, matrix, data frame, timeSeries or zoo object of
asset returns}
\item{sample}{The number of sample paths in the return
series}
\item{p}{confidence level for calculation
,default(p=0.95)}
\item{ps}{The probability for each sample path}
\item{weights}{portfolio weighting vector, default NULL,
see Details}
\item{geometric}{utilize geometric chaining (TRUE) or
simple/arithmetic chaining (FALSE) to aggregate returns,
default TRUE}
\item{type}{The type of BetaDrawdown if specified alpha
then the alpha value given is taken (default 0.95). If
"average" then alpha = 0 and if "max" then alpha = 1 is
taken.}
\item{\dots}{any passthru variable.}
}
\description{
The drawdown beta is formulated as follows
\deqn{\beta_{DD}^i =
\frac{{\sum_{s=1}^S}{\sum_{t=1}^T}p_s{q_t^{*}}{(w_{s,k^{*}(s,t)}^i-w_{st}^i)}}{D_{\alpha}(w^M)}}
here \eqn{\beta_{DD}} is the drawdown beta of the
instrument for multiple sample paths.
\eqn{k^{*}(s,t)\in{argmax_{t_{\tau}{\le}k{\le}t}}w_{sk}^p(x^{*})}
The numerator in \eqn{\beta_{DD}} is the average rate of
return of the instrument over time periods corresponding
to the \eqn{(1-\alpha)T} largest drawdowns of the optimal
portfolio, where \eqn{w_t - w_k^{*}(t)} is the cumulative
rate of return of the instrument from the optimal
portfolio peak time \eqn{k^{*}(t)} to time t.
The difference in CDaR and standard betas can be
explained by the conceptual difference in beta
definitions: the standard beta accounts for the fund
returns over the whole return history, including the
periods when the market goes up, while CDaR betas focus
only on market drawdowns and, thus, are not affected when
the market performs well.
}
\examples{
data(edhec)
MultiBetaDrawdown(cbind(edhec,edhec),cbind(edhec[,2],edhec[,2]),sample = 2,ps=c(0.4,0.6))
BetaDrawdown(edhec[,1],edhec[,2]) #expected value 0.5390431
}
\author{
Pulkit Mehrotra
}
\references{
Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital
Asset Pricing Model (CAPM) with Drawdown Measure. Research
Report 2012-9, ISE Dept., University of Florida, September
2012.
}
\seealso{
\code{\link{ES}} \code{\link{maxDrawdown}}
\code{\link{CdarMultiPath}} \code{\link{AlphaDrawdown}}
\code{\link{CDaR}} \code{\link{BetaDrawdown}}
}
| 2,690 | gpl-2.0 |
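The building block in the formula above is the drawdown of the cumulative return series from its running peak. A minimal R sketch of that single-path ingredient follows; it is not the full multi-path CDaR beta:

```r
# Minimal sketch of the drawdown building block used above: the
# cumulative-return series w_t measured against its running peak.
# Only the single-path ingredient, not the full CDaR beta.
drawdown_sketch <- function(r) {
  w <- cumprod(1 + r)   # cumulative wealth
  1 - w / cummax(w)     # drawdown at each time point
}

drawdown_sketch(c(0.10, -0.05, -0.10, 0.20))
# approximately 0.000 0.050 0.145 0.000
```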
9cb3fb00b5320bbab0fc32d090570617d0352f9e | giabaio/SWSamp | man/SWSamp-package.Rd | \name{SWSamp-package}
\alias{SWSamp-package}
\alias{SWSamp}
\docType{package}
\title{
SWSamp
}
\description{
Sample size calculations for a Stepped Wedge Trial
}
\details{
\tabular{ll}{
Package: \tab SWSamp\cr
Type: \tab Package\cr
Version: \tab 0.3.1\cr
Date: \tab 2019-01-30\cr
License: \tab GPL2 \cr
LazyLoad: \tab yes\cr
}
The package provides a suite of functions to compute the power for a Stepped Wedge Design
under different assumptions. The package can compute power based on simulations or use
closed-form formulae based on Hussey and Hughes.
}
\author{
Gianluca Baio
Maintainer: Gianluca Baio ([email protected])
}
\references{
Baio, G; Copas, A; Ambler, G; Hargreaves, J; Beard, E; and Omar, RZ Sample size calculation for
a stepped wedge trial. Trials, 16:354. Aug 2015.
Hussey M and Hughes J. Design and analysis of stepped wedge cluster randomized trials.
Contemporary Clinical Trials. 28(2):182-91. Epub 2006 Jul 7. Feb 2007
}
\keyword{Sample size calculations}
\keyword{Stepped Wedge Design}
| 1,005 | gpl-3.0 |
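The Hussey and Hughes closed form mentioned above can be written down directly. The following is a hedged sketch of that formula; the argument names and the balanced crossover layout are illustrative assumptions, not SWSamp's actual API:

```r
# Hedged sketch of the Hussey & Hughes (2007) closed-form power for a
# balanced stepped wedge design. Argument names and the design layout
# are illustrative assumptions, not SWSamp's actual interface.
# I clusters, J steps (so T = J + 1 measurement times), K subjects per
# cluster-period, sigma2 residual variance, tau2 between-cluster variance.
sw_power_hh <- function(theta, I, J, K, sigma2, tau2, sig.level = 0.05) {
  T <- J + 1
  step <- rep(seq_len(J), length.out = I)      # crossover step per cluster
  X <- outer(step, seq_len(T) - 1, "<=") * 1   # treatment indicator matrix
  U <- sum(X); W <- sum(colSums(X)^2); V <- sum(rowSums(X)^2)
  s2 <- sigma2 / K                             # variance of a cell mean
  var.theta <- I * s2 * (s2 + T * tau2) /
    ((I * U - W) * s2 + (U^2 + I * T * U - T * W - I * V) * tau2)
  pnorm(abs(theta) / sqrt(var.theta) - qnorm(1 - sig.level / 2))
}

sw_power_hh(theta = 0.5, I = 6, J = 3, K = 20, sigma2 = 4, tau2 = 0.1)
```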
ed7654129322cf2f9e58316eaadf4dea953607a7 | mevers/RNAModR | man/GetDistNearestStartStop.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/helper.R
\name{GetDistNearestStartStop}
\alias{GetDistNearestStartStop}
\title{Get distance of sites to the nearest start/stop codon.}
\usage{
GetDistNearestStartStop(txLoc)
}
\arguments{
\item{txLoc}{A \code{txLoc} object.}
}
\value{
A \code{list}. See 'Details'.
}
\description{
Get distance of sites from a \code{txLoc} to the nearest start/stop codon.
}
\details{
The function converts transcript region coordinates from a \code{txLoc} to
transcript coordinates, and returns distances of sites to the nearest
start/stop codon. A transcript is defined as the concatenation of
the following transcript regions: 5'UTR, CDS, 3'UTR. The return object is
a \code{list} of distances of sites from \code{txLoc} to the nearest start
and stop codons.
}
\author{
Maurits Evers, \email{[email protected]}
}
\keyword{internal}
| 906 | gpl-3.0 |
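The coordinate convention described above (transcript = 5'UTR, then CDS, then 3'UTR) fixes where the start and stop codons sit. A toy sketch of the resulting distance calculation follows; it is illustrative only and not the internal implementation:

```r
# Toy sketch of the coordinate logic described above. With a transcript
# laid out as [5'UTR | CDS | 3'UTR], the start codon begins at
# utr5_len + 1 and the stop codon ends at utr5_len + cds_len.
# Illustrative only; not the package's internal implementation.
dist_start_stop_sketch <- function(pos, utr5_len, cds_len) {
  c(dist_to_start = pos - (utr5_len + 1),
    dist_to_stop  = pos - (utr5_len + cds_len))
}

dist_start_stop_sketch(pos = 150, utr5_len = 100, cds_len = 300)
# dist_to_start = 49 (downstream of start), dist_to_stop = -250 (upstream of stop)
```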
cb0b391c7776f8a77999e91f956ba66ecd3a0c0c | Beothuk/bio.utilities | man/nearest.sorted.angles.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/nearest.sorted.angles.r
\name{nearest.sorted.angles}
\alias{nearest.sorted.angles}
\title{nearest.sorted.angles}
\usage{
nearest.sorted.angles(plist, hull, k, eps = pi/4, ub = NULL)
}
\description{
unknown
}
\seealso{
Other poorly documented: \code{\link{A}},
\code{\link{applyMean}}, \code{\link{applySum}},
\code{\link{break.list}}, \code{\link{by2}},
\code{\link{colour.scale}}, \code{\link{compute.sums}},
\code{\link{convert.graphics.format}},
\code{\link{convert2factor}}, \code{\link{dim_list}},
\code{\link{duplicates.toremove}},
\code{\link{infinite2NA}}, \code{\link{na.zero}}
}
\author{
unknown, \email{<unknown>@dfo-mpo.gc.ca}
}
| 733 | mit |
0a7fe40f9043f1bf63c94195846f79ed3ab25a9a | johngarvin/R-2.1.1rcc | src/library/datasets/man/USPersonalExpenditure.Rd | \name{USPersonalExpenditure}
\docType{data}
\alias{USPersonalExpenditure}
\title{Personal Expenditure Data}
\description{
This data set consists of United States personal expenditures (in
billions of dollars) in the categories; food and tobacco, household
operation, medical and health, personal care, and private education
for the years 1940, 1945, 1950, 1955 and 1960.
}
\usage{USPersonalExpenditure}
\format{A matrix with 5 rows and 5 columns.}
\source{The World Almanac and Book of Facts, 1962, page 756.}
\references{
Tukey, J. W. (1977)
\emph{Exploratory Data Analysis}.
Addison-Wesley.
McNeil, D. R. (1977)
\emph{Interactive Data Analysis}.
Wiley.
}
\examples{
require(stats) # for medpolish
USPersonalExpenditure
medpolish(log10(USPersonalExpenditure))
}
\keyword{datasets}
| 803 | gpl-2.0 |
ea4db8d670ea95fc70b892d3f51b5e05864f41bf | cities-lab/VisionEval | sources/framework/visioneval/man/runModule.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/visioneval.R
\name{runModule}
\alias{runModule}
\title{Run module.}
\usage{
runModule(ModuleName, PackageName, RunFor, RunYear)
}
\arguments{
\item{ModuleName}{A string identifying the name of a module object.}
\item{PackageName}{A string identifying the name of the package the module is
a part of.}
\item{RunFor}{A string identifying whether to run the module for all years
"AllYears", only the base year "BaseYear", or for all years except the base
year "NotBaseYear".}
\item{RunYear}{A string identifying the run year.}
}
\value{
None. The function writes results to the specified locations in the
datastore and prints a message to the console when the module is being run.
}
\description{
\code{runModule} runs a model module.
}
\details{
This function runs a module for a specified year.
}
| 910 | apache-2.0 |
2c05896531094383da54e9c37d8cb7c9c7174902 | johngarvin/R-2.1.1rcc | src/library/base/man/all.Rd | \name{all}
\title{Are All Values True?}
\usage{
all(\dots, na.rm = FALSE)
}
\alias{all}
\description{
Given a set of logical vectors, are all of the values true?
}
\arguments{
\item{\dots}{one or more logical vectors. Other objects are coerced in
a similar way as \code{as.logical.default}.}
\item{na.rm}{logical. If true \code{NA} values are removed before
the result is computed.}
}
\details{
This is a generic function: methods can be defined for it
directly or via the \code{\link{Summary}} group generic.
}
\value{
Given a sequence of logical arguments, a logical value indicating
whether or not all of the elements of \code{x} are \code{TRUE}.
The value returned is \code{TRUE} if all the values in \code{x}
are \code{TRUE}, and \code{FALSE} if any the values in \code{x}
are \code{FALSE}.
If \code{na.rm = FALSE} and \code{x} consists of a mix of \code{TRUE}
and \code{NA} values, the value is \code{NA}.
}
\note{
Prior to \R 2.1.0, only \code{NULL} and logical, integer, numeric
and complex vectors were accepted.
}
\references{
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988)
\emph{The New S Language}.
Wadsworth \& Brooks/Cole.
}
\seealso{
\code{\link{any}}, the \dQuote{complement} of \code{all}, and
\code{\link{stopifnot}(*)} which is an \code{all(*)}
\dQuote{insurance}.
}
\examples{
range(x <- sort(round(rnorm(10) - 1.2, 1)))
if(all(x < 0)) cat("all x values are negative\n")
}
\keyword{logic}
| 1,469 | gpl-2.0 |
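The \code{NA} handling described above is easy to miss; a few lines make it concrete:

```r
# The NA semantics described above: an NA among otherwise-TRUE values
# gives NA (the NA could have been FALSE), while any FALSE settles the
# answer regardless of NAs.
all(c(TRUE, TRUE, NA))                # NA
all(c(TRUE, TRUE, NA), na.rm = TRUE)  # TRUE
all(c(FALSE, NA))                     # FALSE
```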
16901a5d2f96567b7a758ac7bda113ddcaf2c72b | Mouseomics/R | src/library/base/man/qraux.Rd | % File src/library/base/man/qraux.Rd
% Part of the R package, https://www.R-project.org
% Copyright 1995-2015 R Core Team
% Copyright 2002-2015 The R Foundation
% Distributed under GPL 2 or later
\name{QR.Auxiliaries}
\title{Reconstruct the Q, R, or X Matrices from a QR Object}
\usage{
qr.X(qr, complete = FALSE, ncol =)
qr.Q(qr, complete = FALSE, Dvec =)
qr.R(qr, complete = FALSE)
}
\alias{qr.X}
\alias{qr.Q}
\alias{qr.R}
\arguments{
\item{qr}{object representing a QR decomposition. This will
typically have come from a previous call to \code{\link{qr}} or
\code{\link{lsfit}}.}
\item{complete}{logical expression of length 1. Indicates whether an
arbitrary orthogonal completion of the \eqn{\bold{Q}} or
\eqn{\bold{X}} matrices is to be made, or whether the \eqn{\bold{R}}
matrix is to be completed by binding zero-value rows beneath the
square upper triangle.}
\item{ncol}{integer in the range \code{1:nrow(qr$qr)}. The number
of columns to be in the reconstructed \eqn{\bold{X}}. The default
when \code{complete} is \code{FALSE} is the first
\code{min(ncol(X), nrow(X))} columns of the original \eqn{\bold{X}}
from which the qr object was constructed. The default when
\code{complete} is \code{TRUE} is a square matrix with the original
\eqn{\bold{X}} in the first \code{ncol(X)} columns and an arbitrary
orthogonal completion (unitary completion in the complex case) in
the remaining columns.}
\item{Dvec}{vector (not matrix) of diagonal values. Each column of
the returned \eqn{\bold{Q}} will be multiplied by the corresponding
diagonal value. Defaults to all \code{1}s.}
}
\description{
Returns the original matrix from which the object was constructed or
the components of the decomposition.
}
\value{
\code{qr.X} returns \eqn{\bold{X}}, the original matrix from
which the qr object was constructed, provided \code{ncol(X) <= nrow(X)}.
If \code{complete} is \code{TRUE} or the argument \code{ncol} is greater than
\code{ncol(X)}, additional columns from an arbitrary orthogonal
(unitary) completion of \code{X} are returned.
\code{qr.Q} returns part or all of \bold{Q}, the order-nrow(X)
orthogonal (unitary) transformation represented by \code{qr}. If
\code{complete} is \code{TRUE}, \bold{Q} has \code{nrow(X)} columns.
If \code{complete} is \code{FALSE}, \bold{Q} has \code{ncol(X)}
columns. When \code{Dvec} is specified, each column of \bold{Q} is
multiplied by the corresponding value in \code{Dvec}.
Note that \code{qr.Q(qr, *)} is a special case of
\code{\link{qr.qy}(qr, y)} (with a \dQuote{diagonal} \code{y}), and
\code{qr.X(qr, *)} is basically \code{\link{qr.qy}(qr, R)} (apart from
pivoting and \code{dimnames} setting).
\code{qr.R} returns \bold{R}. This may be pivoted, e.g., if
\code{a <- qr(x)} then \code{x[, a$pivot]} = \bold{QR}. The number of
rows of \bold{R} is either \code{nrow(X)} or \code{ncol(X)} (and may
depend on whether \code{complete} is \code{TRUE} or \code{FALSE}).
}
\seealso{
\code{\link{qr}},
\code{\link{qr.qy}}.
}
\examples{
p <- ncol(x <- LifeCycleSavings[, -1]) # not the 'sr'
qrstr <- qr(x) # dim(x) == c(n,p)
qrstr $ rank # = 4 = p
Q <- qr.Q(qrstr) # dim(Q) == dim(x)
R <- qr.R(qrstr) # dim(R) == ncol(x)
X <- qr.X(qrstr) # X == x
range(X - as.matrix(x)) # ~ < 6e-12
## X == Q \%*\% R if there has been no pivoting, as here:
all.equal(unname(X),
unname(Q \%*\% R))
# example of pivoting
x <- cbind(int = 1,
b1 = rep(1:0, each = 3), b2 = rep(0:1, each = 3),
c1 = rep(c(1,0,0), 2), c2 = rep(c(0,1,0), 2), c3 = rep(c(0,0,1),2))
x # is singular, columns "b2" and "c3" are "extra"
a <- qr(x)
zapsmall(qr.R(a)) # columns are int b1 c1 c2 b2 c3
a$pivot
pivI <- sort.list(a$pivot) # the inverse permutation
all.equal (x, qr.Q(a) \%*\% qr.R(a)) # no, no
stopifnot(
all.equal(x[, a$pivot], qr.Q(a) \%*\% qr.R(a)), # TRUE
all.equal(x , qr.Q(a) \%*\% qr.R(a)[, pivI])) # TRUE too!
}
\keyword{algebra}
\keyword{array}
| 4,060 | gpl-2.0 |
16901a5d2f96567b7a758ac7bda113ddcaf2c72b | WelkinGuan/r-source | src/library/base/man/qraux.Rd | % File src/library/base/man/qraux.Rd
% Part of the R package, https://www.R-project.org
% Copyright 1995-2015 R Core Team
% Copyright 2002-2015 The R Foundation
% Distributed under GPL 2 or later
\name{QR.Auxiliaries}
\title{Reconstruct the Q, R, or X Matrices from a QR Object}
\usage{
qr.X(qr, complete = FALSE, ncol =)
qr.Q(qr, complete = FALSE, Dvec =)
qr.R(qr, complete = FALSE)
}
\alias{qr.X}
\alias{qr.Q}
\alias{qr.R}
\arguments{
\item{qr}{object representing a QR decomposition. This will
typically have come from a previous call to \code{\link{qr}} or
\code{\link{lsfit}}.}
\item{complete}{logical expression of length 1. Indicates whether an
arbitrary orthogonal completion of the \eqn{\bold{Q}} or
\eqn{\bold{X}} matrices is to be made, or whether the \eqn{\bold{R}}
matrix is to be completed by binding zero-value rows beneath the
square upper triangle.}
\item{ncol}{integer in the range \code{1:nrow(qr$qr)}. The number
of columns to be in the reconstructed \eqn{\bold{X}}. The default
when \code{complete} is \code{FALSE} is the first
\code{min(ncol(X), nrow(X))} columns of the original \eqn{\bold{X}}
from which the qr object was constructed. The default when
\code{complete} is \code{TRUE} is a square matrix with the original
\eqn{\bold{X}} in the first \code{ncol(X)} columns and an arbitrary
orthogonal completion (unitary completion in the complex case) in
the remaining columns.}
\item{Dvec}{vector (not matrix) of diagonal values. Each column of
the returned \eqn{\bold{Q}} will be multiplied by the corresponding
diagonal value. Defaults to all \code{1}s.}
}
\description{
Returns the original matrix from which the object was constructed or
the components of the decomposition.
}
\value{
\code{qr.X} returns \eqn{\bold{X}}, the original matrix from
which the qr object was constructed, provided \code{ncol(X) <= nrow(X)}.
If \code{complete} is \code{TRUE} or the argument \code{ncol} is greater than
\code{ncol(X)}, additional columns from an arbitrary orthogonal
(unitary) completion of \code{X} are returned.
\code{qr.Q} returns part or all of \bold{Q}, the order-nrow(X)
orthogonal (unitary) transformation represented by \code{qr}. If
\code{complete} is \code{TRUE}, \bold{Q} has \code{nrow(X)} columns.
If \code{complete} is \code{FALSE}, \bold{Q} has \code{ncol(X)}
columns. When \code{Dvec} is specified, each column of \bold{Q} is
multiplied by the corresponding value in \code{Dvec}.
Note that \code{qr.Q(qr, *)} is a special case of
\code{\link{qr.qy}(qr, y)} (with a \dQuote{diagonal} \code{y}), and
\code{qr.X(qr, *)} is basically \code{\link{qr.qy}(qr, R)} (apart from
pivoting and \code{dimnames} setting).
\code{qr.R} returns \bold{R}. This may be pivoted, e.g., if
\code{a <- qr(x)} then \code{x[, a$pivot]} = \bold{QR}. The number of
rows of \bold{R} is either \code{nrow(X)} or \code{ncol(X)} (and may
depend on whether \code{complete} is \code{TRUE} or \code{FALSE}).
}
\seealso{
\code{\link{qr}},
\code{\link{qr.qy}}.
}
\examples{
p <- ncol(x <- LifeCycleSavings[, -1]) # not the 'sr'
qrstr <- qr(x) # dim(x) == c(n,p)
qrstr $ rank # = 4 = p
Q <- qr.Q(qrstr) # dim(Q) == dim(x)
R <- qr.R(qrstr) # dim(R) == ncol(x)
X <- qr.X(qrstr) # X == x
range(X - as.matrix(x)) # ~ < 6e-12
## X == Q \%*\% R if there has been no pivoting, as here:
all.equal(unname(X),
unname(Q \%*\% R))
# example of pivoting
x <- cbind(int = 1,
b1 = rep(1:0, each = 3), b2 = rep(0:1, each = 3),
c1 = rep(c(1,0,0), 2), c2 = rep(c(0,1,0), 2), c3 = rep(c(0,0,1),2))
x # is singular, columns "b2" and "c3" are "extra"
a <- qr(x)
zapsmall(qr.R(a)) # columns are int b1 c1 c2 b2 c3
a$pivot
pivI <- sort.list(a$pivot) # the inverse permutation
all.equal (x, qr.Q(a) \%*\% qr.R(a)) # no, no
stopifnot(
all.equal(x[, a$pivot], qr.Q(a) \%*\% qr.R(a)), # TRUE
all.equal(x , qr.Q(a) \%*\% qr.R(a)[, pivI])) # TRUE too!
}
\keyword{algebra}
\keyword{array}
| 4,060 | gpl-2.0 |
fa60efb119cf2baed37cd47f94f6158887359d45 | lazappi/RNAtools | man/listMDS.Rd | % Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/pca_mds.R
\name{listMDS}
\alias{listMDS}
\title{List MDS}
\usage{
listMDS(data.list, top = nrow(data.list[[1]]),
groups = colnames(data.list[[1]]), selection = "pairwise")
}
\arguments{
\item{data.list}{List of matrices to plot}
\item{top}{Number of rows with highest variance to select for plotting}
\item{groups}{Vector of groups assigned to sample columns}
\item{selection}{Select rows in a "pairwise" manner between samples or
"common" across all samples}
}
\value{
List of ggplot2 objects containing MDS plots
}
\description{
Produce MDS plots from a list of matrices
}
| 683 | bsd-2-clause |
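listMDS above wraps multidimensional scaling of the highest-variance rows; the underlying classical (metric) MDS step can be sketched directly. A minimal numpy sketch under stated assumptions (a hypothetical 2-D point set; double-centering of the squared-distance matrix followed by an eigendecomposition — the actual routine listMDS delegates to may differ):

```python
import numpy as np

# Hypothetical data: 10 points in 2-D; classical MDS should recover
# a configuration with the same pairwise distances.
rng = np.random.default_rng(1)
pts = rng.standard_normal((10, 2))

# Squared Euclidean distance matrix.
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)

# Double-centering: B = -1/2 * J d2 J with J = I - (1/n) 11^T.
n = d2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ d2 @ J

# The top-2 eigenpairs of B give the 2-D MDS configuration.
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]
coords = vecs[:, order[:2]] * np.sqrt(vals[order[:2]])

# Pairwise distances are preserved (up to rotation/reflection).
rec_d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
assert np.allclose(rec_d2, d2)
print("MDS configuration preserves pairwise distances")
```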
16901a5d2f96567b7a758ac7bda113ddcaf2c72b | aviralg/R-dyntrace | src/library/base/man/qraux.Rd | % File src/library/base/man/qraux.Rd
% Part of the R package, https://www.R-project.org
% Copyright 1995-2015 R Core Team
% Copyright 2002-2015 The R Foundation
% Distributed under GPL 2 or later
\name{QR.Auxiliaries}
\title{Reconstruct the Q, R, or X Matrices from a QR Object}
\usage{
qr.X(qr, complete = FALSE, ncol =)
qr.Q(qr, complete = FALSE, Dvec =)
qr.R(qr, complete = FALSE)
}
\alias{qr.X}
\alias{qr.Q}
\alias{qr.R}
\arguments{
\item{qr}{object representing a QR decomposition. This will
typically have come from a previous call to \code{\link{qr}} or
\code{\link{lsfit}}.}
\item{complete}{logical expression of length 1. Indicates whether an
arbitrary orthogonal completion of the \eqn{\bold{Q}} or
\eqn{\bold{X}} matrices is to be made, or whether the \eqn{\bold{R}}
matrix is to be completed by binding zero-value rows beneath the
square upper triangle.}
\item{ncol}{integer in the range \code{1:nrow(qr$qr)}. The number
of columns to be in the reconstructed \eqn{\bold{X}}. The default
when \code{complete} is \code{FALSE} is the first
\code{min(ncol(X), nrow(X))} columns of the original \eqn{\bold{X}}
from which the qr object was constructed. The default when
\code{complete} is \code{TRUE} is a square matrix with the original
\eqn{\bold{X}} in the first \code{ncol(X)} columns and an arbitrary
orthogonal completion (unitary completion in the complex case) in
the remaining columns.}
\item{Dvec}{vector (not matrix) of diagonal values. Each column of
the returned \eqn{\bold{Q}} will be multiplied by the corresponding
diagonal value. Defaults to all \code{1}s.}
}
\description{
Returns the original matrix from which the object was constructed or
the components of the decomposition.
}
\value{
\code{qr.X} returns \eqn{\bold{X}}, the original matrix from
which the qr object was constructed, provided \code{ncol(X) <= nrow(X)}.
If \code{complete} is \code{TRUE} or the argument \code{ncol} is greater than
\code{ncol(X)}, additional columns from an arbitrary orthogonal
(unitary) completion of \code{X} are returned.
\code{qr.Q} returns part or all of \bold{Q}, the order-nrow(X)
orthogonal (unitary) transformation represented by \code{qr}. If
\code{complete} is \code{TRUE}, \bold{Q} has \code{nrow(X)} columns.
If \code{complete} is \code{FALSE}, \bold{Q} has \code{ncol(X)}
columns. When \code{Dvec} is specified, each column of \bold{Q} is
multiplied by the corresponding value in \code{Dvec}.
Note that \code{qr.Q(qr, *)} is a special case of
\code{\link{qr.qy}(qr, y)} (with a \dQuote{diagonal} \code{y}), and
\code{qr.X(qr, *)} is basically \code{\link{qr.qy}(qr, R)} (apart from
pivoting and \code{dimnames} setting).
\code{qr.R} returns \bold{R}. This may be pivoted, e.g., if
\code{a <- qr(x)} then \code{x[, a$pivot]} = \bold{QR}. The number of
rows of \bold{R} is either \code{nrow(X)} or \code{ncol(X)} (and may
depend on whether \code{complete} is \code{TRUE} or \code{FALSE}).
}
\seealso{
\code{\link{qr}},
\code{\link{qr.qy}}.
}
\examples{
p <- ncol(x <- LifeCycleSavings[, -1]) # not the 'sr'
qrstr <- qr(x) # dim(x) == c(n,p)
qrstr $ rank # = 4 = p
Q <- qr.Q(qrstr) # dim(Q) == dim(x)
R <- qr.R(qrstr) # dim(R) == ncol(x)
X <- qr.X(qrstr) # X == x
range(X - as.matrix(x)) # ~ < 6e-12
## X == Q \%*\% R if there has been no pivoting, as here:
all.equal(unname(X),
unname(Q \%*\% R))
# example of pivoting
x <- cbind(int = 1,
b1 = rep(1:0, each = 3), b2 = rep(0:1, each = 3),
c1 = rep(c(1,0,0), 2), c2 = rep(c(0,1,0), 2), c3 = rep(c(0,0,1),2))
x # is singular, columns "b2" and "c3" are "extra"
a <- qr(x)
zapsmall(qr.R(a)) # columns are int b1 c1 c2 b2 c3
a$pivot
pivI <- sort.list(a$pivot) # the inverse permutation
all.equal (x, qr.Q(a) \%*\% qr.R(a)) # no, no
stopifnot(
all.equal(x[, a$pivot], qr.Q(a) \%*\% qr.R(a)), # TRUE
all.equal(x , qr.Q(a) \%*\% qr.R(a)[, pivI])) # TRUE too!
}
\keyword{algebra}
\keyword{array}
| 4,060 | gpl-2.0 |
16901a5d2f96567b7a758ac7bda113ddcaf2c72b | andy-thomason/r-source | src/library/base/man/qraux.Rd | % File src/library/base/man/qraux.Rd
% Part of the R package, https://www.R-project.org
% Copyright 1995-2015 R Core Team
% Copyright 2002-2015 The R Foundation
% Distributed under GPL 2 or later
\name{QR.Auxiliaries}
\title{Reconstruct the Q, R, or X Matrices from a QR Object}
\usage{
qr.X(qr, complete = FALSE, ncol =)
qr.Q(qr, complete = FALSE, Dvec =)
qr.R(qr, complete = FALSE)
}
\alias{qr.X}
\alias{qr.Q}
\alias{qr.R}
\arguments{
\item{qr}{object representing a QR decomposition. This will
typically have come from a previous call to \code{\link{qr}} or
\code{\link{lsfit}}.}
\item{complete}{logical expression of length 1. Indicates whether an
arbitrary orthogonal completion of the \eqn{\bold{Q}} or
\eqn{\bold{X}} matrices is to be made, or whether the \eqn{\bold{R}}
matrix is to be completed by binding zero-value rows beneath the
square upper triangle.}
\item{ncol}{integer in the range \code{1:nrow(qr$qr)}. The number
of columns to be in the reconstructed \eqn{\bold{X}}. The default
when \code{complete} is \code{FALSE} is the first
\code{min(ncol(X), nrow(X))} columns of the original \eqn{\bold{X}}
from which the qr object was constructed. The default when
\code{complete} is \code{TRUE} is a square matrix with the original
\eqn{\bold{X}} in the first \code{ncol(X)} columns and an arbitrary
orthogonal completion (unitary completion in the complex case) in
the remaining columns.}
\item{Dvec}{vector (not matrix) of diagonal values. Each column of
the returned \eqn{\bold{Q}} will be multiplied by the corresponding
diagonal value. Defaults to all \code{1}s.}
}
\description{
Returns the original matrix from which the object was constructed or
the components of the decomposition.
}
\value{
\code{qr.X} returns \eqn{\bold{X}}, the original matrix from
which the qr object was constructed, provided \code{ncol(X) <= nrow(X)}.
If \code{complete} is \code{TRUE} or the argument \code{ncol} is greater than
\code{ncol(X)}, additional columns from an arbitrary orthogonal
(unitary) completion of \code{X} are returned.
\code{qr.Q} returns part or all of \bold{Q}, the order-nrow(X)
orthogonal (unitary) transformation represented by \code{qr}. If
\code{complete} is \code{TRUE}, \bold{Q} has \code{nrow(X)} columns.
If \code{complete} is \code{FALSE}, \bold{Q} has \code{ncol(X)}
columns. When \code{Dvec} is specified, each column of \bold{Q} is
multiplied by the corresponding value in \code{Dvec}.
Note that \code{qr.Q(qr, *)} is a special case of
\code{\link{qr.qy}(qr, y)} (with a \dQuote{diagonal} \code{y}), and
\code{qr.X(qr, *)} is basically \code{\link{qr.qy}(qr, R)} (apart from
pivoting and \code{dimnames} setting).
\code{qr.R} returns \bold{R}. This may be pivoted, e.g., if
\code{a <- qr(x)} then \code{x[, a$pivot]} = \bold{QR}. The number of
rows of \bold{R} is either \code{nrow(X)} or \code{ncol(X)} (and may
depend on whether \code{complete} is \code{TRUE} or \code{FALSE}).
}
\seealso{
\code{\link{qr}},
\code{\link{qr.qy}}.
}
\examples{
p <- ncol(x <- LifeCycleSavings[, -1]) # not the 'sr'
qrstr <- qr(x) # dim(x) == c(n,p)
qrstr $ rank # = 4 = p
Q <- qr.Q(qrstr) # dim(Q) == dim(x)
R <- qr.R(qrstr) # dim(R) == ncol(x)
X <- qr.X(qrstr) # X == x
range(X - as.matrix(x)) # ~ < 6e-12
## X == Q \%*\% R if there has been no pivoting, as here:
all.equal(unname(X),
unname(Q \%*\% R))
# example of pivoting
x <- cbind(int = 1,
b1 = rep(1:0, each = 3), b2 = rep(0:1, each = 3),
c1 = rep(c(1,0,0), 2), c2 = rep(c(0,1,0), 2), c3 = rep(c(0,0,1),2))
x # is singular, columns "b2" and "c3" are "extra"
a <- qr(x)
zapsmall(qr.R(a)) # columns are int b1 c1 c2 b2 c3
a$pivot
pivI <- sort.list(a$pivot) # the inverse permutation
all.equal (x, qr.Q(a) \%*\% qr.R(a)) # no, no
stopifnot(
all.equal(x[, a$pivot], qr.Q(a) \%*\% qr.R(a)), # TRUE
all.equal(x , qr.Q(a) \%*\% qr.R(a)[, pivI])) # TRUE too!
}
\keyword{algebra}
\keyword{array}
| 4,060 | gpl-2.0 |
16901a5d2f96567b7a758ac7bda113ddcaf2c72b | LeifAndersen/R | src/library/base/man/qraux.Rd | % File src/library/base/man/qraux.Rd
% Part of the R package, https://www.R-project.org
% Copyright 1995-2015 R Core Team
% Copyright 2002-2015 The R Foundation
% Distributed under GPL 2 or later
\name{QR.Auxiliaries}
\title{Reconstruct the Q, R, or X Matrices from a QR Object}
\usage{
qr.X(qr, complete = FALSE, ncol =)
qr.Q(qr, complete = FALSE, Dvec =)
qr.R(qr, complete = FALSE)
}
\alias{qr.X}
\alias{qr.Q}
\alias{qr.R}
\arguments{
\item{qr}{object representing a QR decomposition. This will
typically have come from a previous call to \code{\link{qr}} or
\code{\link{lsfit}}.}
\item{complete}{logical expression of length 1. Indicates whether an
arbitrary orthogonal completion of the \eqn{\bold{Q}} or
\eqn{\bold{X}} matrices is to be made, or whether the \eqn{\bold{R}}
matrix is to be completed by binding zero-value rows beneath the
square upper triangle.}
\item{ncol}{integer in the range \code{1:nrow(qr$qr)}. The number
of columns to be in the reconstructed \eqn{\bold{X}}. The default
when \code{complete} is \code{FALSE} is the first
\code{min(ncol(X), nrow(X))} columns of the original \eqn{\bold{X}}
from which the qr object was constructed. The default when
\code{complete} is \code{TRUE} is a square matrix with the original
\eqn{\bold{X}} in the first \code{ncol(X)} columns and an arbitrary
orthogonal completion (unitary completion in the complex case) in
the remaining columns.}
\item{Dvec}{vector (not matrix) of diagonal values. Each column of
the returned \eqn{\bold{Q}} will be multiplied by the corresponding
diagonal value. Defaults to all \code{1}s.}
}
\description{
Returns the original matrix from which the object was constructed or
the components of the decomposition.
}
\value{
\code{qr.X} returns \eqn{\bold{X}}, the original matrix from
which the qr object was constructed, provided \code{ncol(X) <= nrow(X)}.
If \code{complete} is \code{TRUE} or the argument \code{ncol} is greater than
\code{ncol(X)}, additional columns from an arbitrary orthogonal
(unitary) completion of \code{X} are returned.
\code{qr.Q} returns part or all of \bold{Q}, the order-nrow(X)
orthogonal (unitary) transformation represented by \code{qr}. If
\code{complete} is \code{TRUE}, \bold{Q} has \code{nrow(X)} columns.
If \code{complete} is \code{FALSE}, \bold{Q} has \code{ncol(X)}
columns. When \code{Dvec} is specified, each column of \bold{Q} is
multiplied by the corresponding value in \code{Dvec}.
Note that \code{qr.Q(qr, *)} is a special case of
\code{\link{qr.qy}(qr, y)} (with a \dQuote{diagonal} \code{y}), and
\code{qr.X(qr, *)} is basically \code{\link{qr.qy}(qr, R)} (apart from
pivoting and \code{dimnames} setting).
\code{qr.R} returns \bold{R}. This may be pivoted, e.g., if
\code{a <- qr(x)} then \code{x[, a$pivot]} = \bold{QR}. The number of
rows of \bold{R} is either \code{nrow(X)} or \code{ncol(X)} (and may
depend on whether \code{complete} is \code{TRUE} or \code{FALSE}).
}
\seealso{
\code{\link{qr}},
\code{\link{qr.qy}}.
}
\examples{
p <- ncol(x <- LifeCycleSavings[, -1]) # not the 'sr'
qrstr <- qr(x) # dim(x) == c(n,p)
qrstr $ rank # = 4 = p
Q <- qr.Q(qrstr) # dim(Q) == dim(x)
R <- qr.R(qrstr) # dim(R) == ncol(x)
X <- qr.X(qrstr) # X == x
range(X - as.matrix(x)) # ~ < 6e-12
## X == Q \%*\% R if there has been no pivoting, as here:
all.equal(unname(X),
unname(Q \%*\% R))
# example of pivoting
x <- cbind(int = 1,
b1 = rep(1:0, each = 3), b2 = rep(0:1, each = 3),
c1 = rep(c(1,0,0), 2), c2 = rep(c(0,1,0), 2), c3 = rep(c(0,0,1),2))
x # is singular, columns "b2" and "c3" are "extra"
a <- qr(x)
zapsmall(qr.R(a)) # columns are int b1 c1 c2 b2 c3
a$pivot
pivI <- sort.list(a$pivot) # the inverse permutation
all.equal (x, qr.Q(a) \%*\% qr.R(a)) # no, no
stopifnot(
all.equal(x[, a$pivot], qr.Q(a) \%*\% qr.R(a)), # TRUE
all.equal(x , qr.Q(a) \%*\% qr.R(a)[, pivI])) # TRUE too!
}
\keyword{algebra}
\keyword{array}
| 4,060 | gpl-2.0 |
9230e65c955aec31af8dce0b69f7ae4ad189ffdb | swarm-lab/Rvision | man/plotOF.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/opticalFlow.R
\name{plotOF}
\alias{plotOF}
\title{Plot Optical Flow Arrays}
\usage{
plotOF(
of,
gridsize = c(25, 25),
thresh = 0,
add = TRUE,
arrow.ex = 0.05,
xpd = TRUE,
...
)
}
\arguments{
\item{of}{An \code{\link{Image}} produced by the \code{\link{farneback}}
function.}
\item{gridsize}{A 2-element vector indicating the number of optical flow
vectors to plot in each x-y dimension (default: c(25, 25)). Alternatively, a
numeric value that will be used for both dimensions.}
\item{thresh}{The minimal length of optical flow vectors that should be
plotted (default: 0).}
\item{add}{A logical indicating whether to plot the vector field over an
existing plot (default: TRUE).}
\item{arrow.ex}{Controls the length of the arrows. The length is in terms of
the fraction of the shorter axis in the plot. So with a default of .05, 20
arrows of maximum length can line up end to end along the shorter axis.}
\item{xpd}{If \code{TRUE}, arrows are not clipped to fit inside the plot
region; the default is not to clip.}
\item{...}{Graphics arguments passed to the \code{\link{arrows}} function that
can change the color or arrow sizes. See \code{\link{arrows}} help for details.}
}
\description{
Plotting method for \code{\link{Image}} objects produced by the
\code{\link{farneback}} function.
}
\examples{
balloon <- video(system.file("sample_vid/Balloon.mp4", package = "Rvision"))
balloon1 <- readFrame(balloon, 1)
balloon2 <- readFrame(balloon, 2)
changeColorSpace(balloon1, "GRAY", "self")
changeColorSpace(balloon2, "GRAY", "self")
of <- farneback(balloon1, balloon2)
plot(of)
plotOF(of, length = 0.05)
}
\seealso{
\code{\link{farneback}}, \code{\link{arrows}}
}
\author{
Simon Garnier, \email{[email protected]}
}
| 1,801 | gpl-3.0 |
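plotOF above thins a dense flow field down to a gridsize grid and drops vectors shorter than thresh before drawing arrows. That selection step can be sketched in numpy (a hypothetical random field stands in for farneback() output; the exact sampling rule Rvision uses may differ):

```python
import numpy as np

# Hypothetical dense optical-flow field, H x W x 2 (dx, dy per pixel).
rng = np.random.default_rng(2)
flow = rng.standard_normal((120, 160, 2))

def sample_flow(flow, gridsize=(25, 25), thresh=0.0):
    """Pick a gridsize[0] x gridsize[1] lattice of anchor points and keep
    only the vectors whose length reaches thresh -- the thinning that
    plotOF performs before calling arrows()."""
    h, w, _ = flow.shape
    ys = np.linspace(0, h - 1, gridsize[1]).round().astype(int)
    xs = np.linspace(0, w - 1, gridsize[0]).round().astype(int)
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    v = flow[yy, xx]                          # sampled (dx, dy) vectors
    lengths = np.hypot(v[..., 0], v[..., 1])
    keep = lengths >= thresh
    return xx[keep], yy[keep], v[keep]

x0, y0, v = sample_flow(flow, gridsize=(25, 25), thresh=1.0)
assert len(x0) == len(y0) == len(v)
assert np.all(np.hypot(v[:, 0], v[:, 1]) >= 1.0)
print(f"{len(v)} arrows would be drawn")
```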
16901a5d2f96567b7a758ac7bda113ddcaf2c72b | kmillar/cxxr | src/library/base/man/qraux.Rd | % File src/library/base/man/qraux.Rd
% Part of the R package, https://www.R-project.org
% Copyright 1995-2015 R Core Team
% Copyright 2002-2015 The R Foundation
% Distributed under GPL 2 or later
\name{QR.Auxiliaries}
\title{Reconstruct the Q, R, or X Matrices from a QR Object}
\usage{
qr.X(qr, complete = FALSE, ncol =)
qr.Q(qr, complete = FALSE, Dvec =)
qr.R(qr, complete = FALSE)
}
\alias{qr.X}
\alias{qr.Q}
\alias{qr.R}
\arguments{
\item{qr}{object representing a QR decomposition. This will
typically have come from a previous call to \code{\link{qr}} or
\code{\link{lsfit}}.}
\item{complete}{logical expression of length 1. Indicates whether an
arbitrary orthogonal completion of the \eqn{\bold{Q}} or
\eqn{\bold{X}} matrices is to be made, or whether the \eqn{\bold{R}}
matrix is to be completed by binding zero-value rows beneath the
square upper triangle.}
\item{ncol}{integer in the range \code{1:nrow(qr$qr)}. The number
of columns to be in the reconstructed \eqn{\bold{X}}. The default
when \code{complete} is \code{FALSE} is the first
\code{min(ncol(X), nrow(X))} columns of the original \eqn{\bold{X}}
from which the qr object was constructed. The default when
\code{complete} is \code{TRUE} is a square matrix with the original
\eqn{\bold{X}} in the first \code{ncol(X)} columns and an arbitrary
orthogonal completion (unitary completion in the complex case) in
the remaining columns.}
\item{Dvec}{vector (not matrix) of diagonal values. Each column of
the returned \eqn{\bold{Q}} will be multiplied by the corresponding
diagonal value. Defaults to all \code{1}s.}
}
\description{
Returns the original matrix from which the object was constructed or
the components of the decomposition.
}
\value{
\code{qr.X} returns \eqn{\bold{X}}, the original matrix from
which the qr object was constructed, provided \code{ncol(X) <= nrow(X)}.
If \code{complete} is \code{TRUE} or the argument \code{ncol} is greater than
\code{ncol(X)}, additional columns from an arbitrary orthogonal
(unitary) completion of \code{X} are returned.
\code{qr.Q} returns part or all of \bold{Q}, the order-nrow(X)
orthogonal (unitary) transformation represented by \code{qr}. If
\code{complete} is \code{TRUE}, \bold{Q} has \code{nrow(X)} columns.
If \code{complete} is \code{FALSE}, \bold{Q} has \code{ncol(X)}
columns. When \code{Dvec} is specified, each column of \bold{Q} is
multiplied by the corresponding value in \code{Dvec}.
Note that \code{qr.Q(qr, *)} is a special case of
\code{\link{qr.qy}(qr, y)} (with a \dQuote{diagonal} \code{y}), and
\code{qr.X(qr, *)} is basically \code{\link{qr.qy}(qr, R)} (apart from
pivoting and \code{dimnames} setting).
\code{qr.R} returns \bold{R}. This may be pivoted, e.g., if
\code{a <- qr(x)} then \code{x[, a$pivot]} = \bold{QR}. The number of
rows of \bold{R} is either \code{nrow(X)} or \code{ncol(X)} (and may
depend on whether \code{complete} is \code{TRUE} or \code{FALSE}).
}
\seealso{
\code{\link{qr}},
\code{\link{qr.qy}}.
}
\examples{
p <- ncol(x <- LifeCycleSavings[, -1]) # not the 'sr'
qrstr <- qr(x) # dim(x) == c(n,p)
qrstr $ rank # = 4 = p
Q <- qr.Q(qrstr) # dim(Q) == dim(x)
R <- qr.R(qrstr) # dim(R) == ncol(x)
X <- qr.X(qrstr) # X == x
range(X - as.matrix(x)) # ~ < 6e-12
## X == Q \%*\% R if there has been no pivoting, as here:
all.equal(unname(X),
unname(Q \%*\% R))
# example of pivoting
x <- cbind(int = 1,
b1 = rep(1:0, each = 3), b2 = rep(0:1, each = 3),
c1 = rep(c(1,0,0), 2), c2 = rep(c(0,1,0), 2), c3 = rep(c(0,0,1),2))
x # is singular, columns "b2" and "c3" are "extra"
a <- qr(x)
zapsmall(qr.R(a)) # columns are int b1 c1 c2 b2 c3
a$pivot
pivI <- sort.list(a$pivot) # the inverse permutation
all.equal (x, qr.Q(a) \%*\% qr.R(a)) # no, no
stopifnot(
all.equal(x[, a$pivot], qr.Q(a) \%*\% qr.R(a)), # TRUE
all.equal(x , qr.Q(a) \%*\% qr.R(a)[, pivI])) # TRUE too!
}
\keyword{algebra}
\keyword{array}
| 4,060 | gpl-2.0 |
16901a5d2f96567b7a758ac7bda113ddcaf2c72b | wch/r-source | src/library/base/man/qraux.Rd | % File src/library/base/man/qraux.Rd
% Part of the R package, https://www.R-project.org
% Copyright 1995-2015 R Core Team
% Copyright 2002-2015 The R Foundation
% Distributed under GPL 2 or later
\name{QR.Auxiliaries}
\title{Reconstruct the Q, R, or X Matrices from a QR Object}
\usage{
qr.X(qr, complete = FALSE, ncol =)
qr.Q(qr, complete = FALSE, Dvec =)
qr.R(qr, complete = FALSE)
}
\alias{qr.X}
\alias{qr.Q}
\alias{qr.R}
\arguments{
\item{qr}{object representing a QR decomposition. This will
typically have come from a previous call to \code{\link{qr}} or
\code{\link{lsfit}}.}
\item{complete}{logical expression of length 1. Indicates whether an
arbitrary orthogonal completion of the \eqn{\bold{Q}} or
\eqn{\bold{X}} matrices is to be made, or whether the \eqn{\bold{R}}
matrix is to be completed by binding zero-value rows beneath the
square upper triangle.}
\item{ncol}{integer in the range \code{1:nrow(qr$qr)}. The number
of columns to be in the reconstructed \eqn{\bold{X}}. The default
when \code{complete} is \code{FALSE} is the first
\code{min(ncol(X), nrow(X))} columns of the original \eqn{\bold{X}}
from which the qr object was constructed. The default when
\code{complete} is \code{TRUE} is a square matrix with the original
\eqn{\bold{X}} in the first \code{ncol(X)} columns and an arbitrary
orthogonal completion (unitary completion in the complex case) in
the remaining columns.}
\item{Dvec}{vector (not matrix) of diagonal values. Each column of
the returned \eqn{\bold{Q}} will be multiplied by the corresponding
diagonal value. Defaults to all \code{1}s.}
}
\description{
Returns the original matrix from which the object was constructed or
the components of the decomposition.
}
\value{
\code{qr.X} returns \eqn{\bold{X}}, the original matrix from
which the qr object was constructed, provided \code{ncol(X) <= nrow(X)}.
If \code{complete} is \code{TRUE} or the argument \code{ncol} is greater than
\code{ncol(X)}, additional columns from an arbitrary orthogonal
(unitary) completion of \code{X} are returned.
\code{qr.Q} returns part or all of \bold{Q}, the order-nrow(X)
orthogonal (unitary) transformation represented by \code{qr}. If
\code{complete} is \code{TRUE}, \bold{Q} has \code{nrow(X)} columns.
If \code{complete} is \code{FALSE}, \bold{Q} has \code{ncol(X)}
columns. When \code{Dvec} is specified, each column of \bold{Q} is
multiplied by the corresponding value in \code{Dvec}.
Note that \code{qr.Q(qr, *)} is a special case of
\code{\link{qr.qy}(qr, y)} (with a \dQuote{diagonal} \code{y}), and
\code{qr.X(qr, *)} is basically \code{\link{qr.qy}(qr, R)} (apart from
pivoting and \code{dimnames} setting).
\code{qr.R} returns \bold{R}. This may be pivoted, e.g., if
\code{a <- qr(x)} then \code{x[, a$pivot]} = \bold{QR}. The number of
rows of \bold{R} is either \code{nrow(X)} or \code{ncol(X)} (and may
depend on whether \code{complete} is \code{TRUE} or \code{FALSE}).
}
\seealso{
\code{\link{qr}},
\code{\link{qr.qy}}.
}
\examples{
p <- ncol(x <- LifeCycleSavings[, -1]) # not the 'sr'
qrstr <- qr(x) # dim(x) == c(n,p)
qrstr $ rank # = 4 = p
Q <- qr.Q(qrstr) # dim(Q) == dim(x)
R <- qr.R(qrstr) # dim(R) == ncol(x)
X <- qr.X(qrstr) # X == x
range(X - as.matrix(x)) # ~ < 6e-12
## X == Q \%*\% R if there has been no pivoting, as here:
all.equal(unname(X),
unname(Q \%*\% R))
# example of pivoting
x <- cbind(int = 1,
b1 = rep(1:0, each = 3), b2 = rep(0:1, each = 3),
c1 = rep(c(1,0,0), 2), c2 = rep(c(0,1,0), 2), c3 = rep(c(0,0,1),2))
x # is singular, columns "b2" and "c3" are "extra"
a <- qr(x)
zapsmall(qr.R(a)) # columns are int b1 c1 c2 b2 c3
a$pivot
pivI <- sort.list(a$pivot) # the inverse permutation
all.equal (x, qr.Q(a) \%*\% qr.R(a)) # no, no
stopifnot(
all.equal(x[, a$pivot], qr.Q(a) \%*\% qr.R(a)), # TRUE
all.equal(x , qr.Q(a) \%*\% qr.R(a)[, pivI])) # TRUE too!
}
\keyword{algebra}
\keyword{array}
| 4,060 | gpl-2.0 |
dd31c3035360bc5e9db479b55c320e22bbf70cfa | intermine/rintermine | man/PL_DiabetesGenes.Rd | \name{PL_DiabetesGenes}
\alias{PL_DiabetesGenes}
\docType{data}
\title{
PL_DiabetesGenes data
}
\description{
A dataset containing identifiers of genes associated with all forms of Diabetes according to OMIM \url{http://www.omim.org/}.
}
\usage{data("PL_DiabetesGenes")}
\format{
A data frame with 68 observations on the following 6 variables:
\describe{
\item{Gene.symbol}{Gene symbol}
\item{Gene.name}{Gene whole name}
\item{Gene.primaryIdentifier}{InterMine Gene.primaryIdentifier (ENTREZ identifier)}
\item{Gene.secondaryIdentifier}{InterMine Gene.secondaryIdentifier (ENSEMBLE identifier)}
\item{Gene.length}{Gene length in base pairs}
\item{Gene.organism.name}{Gene organism name}
}
}
\source{
\url{http://www.humanmine.org/humanmine/bag.do?subtab=view}
}
\examples{
data(PL_DiabetesGenes)
}
\keyword{datasets}
| 899 | lgpl-3.0 |
16901a5d2f96567b7a758ac7bda113ddcaf2c72b | reactorlabs/gnur | src/library/base/man/qraux.Rd | % File src/library/base/man/qraux.Rd
% Part of the R package, https://www.R-project.org
% Copyright 1995-2015 R Core Team
% Copyright 2002-2015 The R Foundation
% Distributed under GPL 2 or later
\name{QR.Auxiliaries}
\title{Reconstruct the Q, R, or X Matrices from a QR Object}
\usage{
qr.X(qr, complete = FALSE, ncol =)
qr.Q(qr, complete = FALSE, Dvec =)
qr.R(qr, complete = FALSE)
}
\alias{qr.X}
\alias{qr.Q}
\alias{qr.R}
\arguments{
\item{qr}{object representing a QR decomposition. This will
typically have come from a previous call to \code{\link{qr}} or
\code{\link{lsfit}}.}
\item{complete}{logical expression of length 1. Indicates whether an
arbitrary orthogonal completion of the \eqn{\bold{Q}} or
\eqn{\bold{X}} matrices is to be made, or whether the \eqn{\bold{R}}
matrix is to be completed by binding zero-value rows beneath the
square upper triangle.}
\item{ncol}{integer in the range \code{1:nrow(qr$qr)}. The number
of columns to be in the reconstructed \eqn{\bold{X}}. The default
when \code{complete} is \code{FALSE} is the first
\code{min(ncol(X), nrow(X))} columns of the original \eqn{\bold{X}}
from which the qr object was constructed. The default when
\code{complete} is \code{TRUE} is a square matrix with the original
\eqn{\bold{X}} in the first \code{ncol(X)} columns and an arbitrary
orthogonal completion (unitary completion in the complex case) in
the remaining columns.}
\item{Dvec}{vector (not matrix) of diagonal values. Each column of
the returned \eqn{\bold{Q}} will be multiplied by the corresponding
diagonal value. Defaults to all \code{1}s.}
}
\description{
Returns the original matrix from which the object was constructed or
the components of the decomposition.
}
\value{
\code{qr.X} returns \eqn{\bold{X}}, the original matrix from
which the qr object was constructed, provided \code{ncol(X) <= nrow(X)}.
If \code{complete} is \code{TRUE} or the argument \code{ncol} is greater than
\code{ncol(X)}, additional columns from an arbitrary orthogonal
(unitary) completion of \code{X} are returned.
\code{qr.Q} returns part or all of \bold{Q}, the order-nrow(X)
orthogonal (unitary) transformation represented by \code{qr}. If
\code{complete} is \code{TRUE}, \bold{Q} has \code{nrow(X)} columns.
If \code{complete} is \code{FALSE}, \bold{Q} has \code{ncol(X)}
columns. When \code{Dvec} is specified, each column of \bold{Q} is
multiplied by the corresponding value in \code{Dvec}.
Note that \code{qr.Q(qr, *)} is a special case of
\code{\link{qr.qy}(qr, y)} (with a \dQuote{diagonal} \code{y}), and
\code{qr.X(qr, *)} is basically \code{\link{qr.qy}(qr, R)} (apart from
pivoting and \code{dimnames} setting).
\code{qr.R} returns \bold{R}. This may be pivoted, e.g., if
\code{a <- qr(x)} then \code{x[, a$pivot]} = \bold{QR}. The number of
rows of \bold{R} is either \code{nrow(X)} or \code{ncol(X)} (and may
depend on whether \code{complete} is \code{TRUE} or \code{FALSE}).
}
\seealso{
\code{\link{qr}},
\code{\link{qr.qy}}.
}
\examples{
p <- ncol(x <- LifeCycleSavings[, -1]) # not the 'sr'
qrstr <- qr(x) # dim(x) == c(n,p)
qrstr $ rank # = 4 = p
Q <- qr.Q(qrstr) # dim(Q) == dim(x)
R <- qr.R(qrstr) # dim(R) == ncol(x)
X <- qr.X(qrstr) # X == x
range(X - as.matrix(x)) # ~ < 6e-12
## X == Q \%*\% R if there has been no pivoting, as here:
all.equal(unname(X),
unname(Q \%*\% R))
# example of pivoting
x <- cbind(int = 1,
b1 = rep(1:0, each = 3), b2 = rep(0:1, each = 3),
c1 = rep(c(1,0,0), 2), c2 = rep(c(0,1,0), 2), c3 = rep(c(0,0,1),2))
x # is singular, columns "b2" and "c3" are "extra"
a <- qr(x)
zapsmall(qr.R(a)) # columns are int b1 c1 c2 b2 c3
a$pivot
pivI <- sort.list(a$pivot) # the inverse permutation
all.equal (x, qr.Q(a) \%*\% qr.R(a)) # no, no
stopifnot(
all.equal(x[, a$pivot], qr.Q(a) \%*\% qr.R(a)), # TRUE
all.equal(x , qr.Q(a) \%*\% qr.R(a)[, pivI])) # TRUE too!
}
\keyword{algebra}
\keyword{array}
| 4,060 | gpl-2.0 |
16901a5d2f96567b7a758ac7bda113ddcaf2c72b | krlmlr/cxxr | src/library/base/man/qraux.Rd | % File src/library/base/man/qraux.Rd
% Part of the R package, https://www.R-project.org
% Copyright 1995-2015 R Core Team
% Copyright 2002-2015 The R Foundation
% Distributed under GPL 2 or later
\name{QR.Auxiliaries}
\title{Reconstruct the Q, R, or X Matrices from a QR Object}
\usage{
qr.X(qr, complete = FALSE, ncol =)
qr.Q(qr, complete = FALSE, Dvec =)
qr.R(qr, complete = FALSE)
}
\alias{qr.X}
\alias{qr.Q}
\alias{qr.R}
\arguments{
\item{qr}{object representing a QR decomposition. This will
typically have come from a previous call to \code{\link{qr}} or
\code{\link{lsfit}}.}
\item{complete}{logical expression of length 1. Indicates whether an
arbitrary orthogonal completion of the \eqn{\bold{Q}} or
\eqn{\bold{X}} matrices is to be made, or whether the \eqn{\bold{R}}
matrix is to be completed by binding zero-value rows beneath the
square upper triangle.}
\item{ncol}{integer in the range \code{1:nrow(qr$qr)}. The number
of columns to be in the reconstructed \eqn{\bold{X}}. The default
when \code{complete} is \code{FALSE} is the first
\code{min(ncol(X), nrow(X))} columns of the original \eqn{\bold{X}}
from which the qr object was constructed. The default when
\code{complete} is \code{TRUE} is a square matrix with the original
\eqn{\bold{X}} in the first \code{ncol(X)} columns and an arbitrary
orthogonal completion (unitary completion in the complex case) in
the remaining columns.}
\item{Dvec}{vector (not matrix) of diagonal values. Each column of
the returned \eqn{\bold{Q}} will be multiplied by the corresponding
diagonal value. Defaults to all \code{1}s.}
}
\description{
Returns the original matrix from which the object was constructed or
the components of the decomposition.
}
\value{
\code{qr.X} returns \eqn{\bold{X}}, the original matrix from
which the qr object was constructed, provided \code{ncol(X) <= nrow(X)}.
If \code{complete} is \code{TRUE} or the argument \code{ncol} is greater than
\code{ncol(X)}, additional columns from an arbitrary orthogonal
(unitary) completion of \code{X} are returned.

\code{qr.Q} returns part or all of \bold{Q}, the order-nrow(X)
orthogonal (unitary) transformation represented by \code{qr}. If
\code{complete} is \code{TRUE}, \bold{Q} has \code{nrow(X)} columns.
If \code{complete} is \code{FALSE}, \bold{Q} has \code{ncol(X)}
columns. When \code{Dvec} is specified, each column of \bold{Q} is
multiplied by the corresponding value in \code{Dvec}.

Note that \code{qr.Q(qr, *)} is a special case of
\code{\link{qr.qy}(qr, y)} (with a \dQuote{diagonal} \code{y}), and
\code{qr.X(qr, *)} is basically \code{\link{qr.qy}(qr, R)} (apart from
pivoting and \code{dimnames} setting).

\code{qr.R} returns \bold{R}. This may be pivoted, e.g., if
\code{a <- qr(x)} then \code{x[, a$pivot]} = \bold{QR}. The number of
rows of \bold{R} is either \code{nrow(X)} or \code{ncol(X)} (and may
depend on whether \code{complete} is \code{TRUE} or \code{FALSE}).
}
\seealso{
\code{\link{qr}},
\code{\link{qr.qy}}.
}
\examples{
p <- ncol(x <- LifeCycleSavings[, -1]) # not the 'sr'
qrstr <- qr(x) # dim(x) == c(n,p)
qrstr $ rank # = 4 = p
Q <- qr.Q(qrstr) # dim(Q) == dim(x)
R <- qr.R(qrstr) # dim(R) == ncol(x)
X <- qr.X(qrstr) # X == x
range(X - as.matrix(x)) # ~ < 6e-12
## X == Q \%*\% R if there has been no pivoting, as here:
all.equal(unname(X),
unname(Q \%*\% R))
# example of pivoting
x <- cbind(int = 1,
b1 = rep(1:0, each = 3), b2 = rep(0:1, each = 3),
c1 = rep(c(1,0,0), 2), c2 = rep(c(0,1,0), 2), c3 = rep(c(0,0,1),2))
x # is singular, columns "b2" and "c3" are "extra"
a <- qr(x)
zapsmall(qr.R(a)) # columns are int b1 c1 c2 b2 c3
a$pivot
pivI <- sort.list(a$pivot) # the inverse permutation
all.equal (x, qr.Q(a) \%*\% qr.R(a)) # no, no
stopifnot(
all.equal(x[, a$pivot], qr.Q(a) \%*\% qr.R(a)), # TRUE
all.equal(x , qr.Q(a) \%*\% qr.R(a)[, pivI])) # TRUE too!
}
\keyword{algebra}
\keyword{array}
| 4,060 | gpl-2.0 |
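The `complete = TRUE` behaviour documented above — an arbitrary orthogonal (unitary) completion appended after the columns spanning \bold{X}, with zero rows bound beneath the upper triangle of \bold{R} — mirrors the "complete" mode of a full QR factorization. An illustrative numerical sketch in Python/NumPy (not part of the original man page; note that unlike R's `qr`, `numpy.linalg.qr` does not pivot):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))          # tall matrix, full column rank

# mode="complete": Q is square (5 x 5), R is 5 x 3 with zero rows below
# the upper triangle -- the analogue of qr.Q(..., complete = TRUE) and
# qr.R(..., complete = TRUE) described above.
Q, R = np.linalg.qr(X, mode="complete")

assert Q.shape == (5, 5) and R.shape == (5, 3)
assert np.allclose(Q.T @ Q, np.eye(5))   # orthonormal, incl. the completion
assert np.allclose(Q @ R, X)             # full product still reconstructs X
assert np.allclose(Q[:, :3] @ R[:3], X)  # only the first ncol(X) columns matter
assert np.allclose(R[3:], 0)             # zero rows beneath the triangle
print("ok")
```

The last two assertions show why the completion columns are "arbitrary": they multiply only zero rows of \bold{R}, so any orthonormal extension reconstructs the same \bold{X}.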
dd31c3035360bc5e9db479b55c320e22bbf70cfa | kostaskyritsis/InterMineR | man/PL_DiabetesGenes.Rd | \name{PL_DiabetesGenes}
\alias{PL_DiabetesGenes}
\docType{data}
\title{
PL_DiabetesGenes data
}
\description{
A dataset containing identifiers of genes associated with all forms of Diabetes according to OMIM \url{http://www.omim.org/}.
}
\usage{data("PL_DiabetesGenes")}
\format{
A data frame with 68 observations on the following 6 variables:
\describe{
\item{Gene.symbol}{Gene symbol}
\item{Gene.name}{Gene whole name}
\item{Gene.primaryIdentifier}{InterMine Gene.primaryIdentifier (ENTREZ identifier)}
\item{Gene.secondaryIdentifier}{InterMine Gene.secondaryIdentifier (ENSEMBL identifier)}
\item{Gene.length}{Gene length in base pairs}
\item{Gene.organism.name}{Gene organism name}
}
}
\source{
\url{http://www.humanmine.org/humanmine/bag.do?subtab=view}
}
\examples{
data(PL_DiabetesGenes)
}
\keyword{datasets}
| 899 | lgpl-2.1 |
556e92a2b812c54d701179bdf96d6631449c8ae4 | okamumu/Rsrat | man/lgumbel.min.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgumbel.R
\name{lgumbel.min}
\alias{lgumbel.min}
\alias{dlgumbel.min}
\alias{plgumbel.min}
\alias{qlgumbel.min}
\alias{rlgumbel.min}
\title{log-Gumbel distribution (Weibull)}
\usage{
dlgumbel.min(x, loclog = 0, scalelog = 1, log = FALSE)
plgumbel.min(q, loclog = 0, scalelog = 1, lower.tail = TRUE,
log.p = FALSE)
qlgumbel.min(p, loclog = 0, scalelog = 1, lower.tail = TRUE,
log.p = FALSE)
rlgumbel.min(n, loclog = 0, scalelog = 1)
}
\arguments{
\item{x, q}{A numeric vector of quantiles.}
\item{loclog}{A numeric value of location parameter for Gumbel distribution.}
\item{scalelog}{A numeric value of scale parameter for Gumbel distribution.}
\item{log, log.p}{A logical; if TRUE, the probability p is given as log(p).}
\item{lower.tail}{A logical; if TRUE, the probability is P[X <= x], otherwise, P[X > x].}
\item{p}{A numeric vector of probabilities.}
\item{n}{A number of observations to be generated.}
}
\value{
'dlgumbel.min' gives the density, 'plgumbel.min' gives the distribution
function, 'qlgumbel.min' gives the quantile function, and 'rlgumbel.min'
generates random deviates.
}
\description{
Density, distribution function, quantile function, and random generation for
the log-Gumbel (Weibull) distribution.
}
\details{
The log-Gumbel distribution (minimum) has the cumulative distribution function
F(q) = 1-exp(-exp(-z)) where z = (-log(q)-loclog)/scalelog
}
| 1,332 | mit |
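The CDF given in the details section above can be sanity-checked numerically. The sketch below is Python rather than R (purely for illustration; the package itself is R), and the Weibull correspondence used — shape 1/scalelog, scale exp(-loclog) — is derived here from the stated formula, not taken from the Rsrat sources:

```python
import math

def plgumbel_min(q, loclog=0.0, scalelog=1.0):
    # CDF from the documentation: F(q) = 1 - exp(-exp(-z)),
    # with z = (-log(q) - loclog) / scalelog
    z = (-math.log(q) - loclog) / scalelog
    return 1.0 - math.exp(-math.exp(-z))

def weibull_cdf(q, shape, scale):
    # Standard Weibull CDF, for comparison.
    return 1.0 - math.exp(-((q / scale) ** shape))

# Under this parameterization the distribution coincides with a Weibull
# of shape 1/scalelog and scale exp(-loclog) (derived, an assumption
# about the intended mapping rather than a documented fact).
m, s = 0.5, 2.0
for q in (0.1, 1.0, 3.0, 10.0):
    assert abs(plgumbel_min(q, m, s)
               - weibull_cdf(q, shape=1.0 / s, scale=math.exp(-m))) < 1e-12
```

With loclog = 0 and scalelog = 1 this reduces to the standard exponential CDF 1 - exp(-q), consistent with the Weibull reading of the title.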
9f8054495e188bffee39a66f205b2d1020092d8f | syberia/tundra | man/tundra.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/package.tundra.R
\docType{package}
\name{tundra}
\alias{tundra}
\alias{tundra-package}
\title{Tundra is a standardized classifier container format for R.}
\description{
Deploying models in production systems is generally a cumbersome process.
If analysis is performed in a language like R or SAS, the coefficients of the
model are usually extracted and translated to a "production-ready" language like
C++ or Java.
}
\details{
However, this approach is flawed. The translation process is time consuming
and error-prone. R is demonstrably capable of serving models
in production environments as long as submillisecond latency is not a
requirement. This means it should be possible to push analysis performed in
R to directly score records in production systems without an intermediary.
This significantly decreases the cost of iterating on machine learning
models.
A tundraContainer is a simple bundling of the two critical components of
any machine learning model.
\itemize{
\item{The data preparation required to convert raw production data to
a record that is acceptable to a trained classifier. For example,
a regression-based model may need discretization of non-categorical
variables or imputation of missing values.}
\item{The trained classifier, usually a native R S3 object with
a \code{train} method.}
}
The former is provided by the \href{https://github.com/syberia/mungebits2}{mungebits2}
package, while the latter is fully customizable to any R function. This
approach allows arbitrary data preparation and statistical methods, unlike
attempts such as PMML (Predictive Modeling Markup Language) which constrain
the space of possible data preparation methodologies and statistical
methodologies to a very limited subset.
}
| 1,834 | mit |
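The container idea described above — data preparation and trained classifier travelling together as one object — can be sketched in a few lines. This is a hypothetical Python analogue (the real tundraContainer is an R object built on mungebits2); the munge and train functions below are made up for illustration:

```python
class ModelContainer:
    """Hypothetical analogue of a tundraContainer: it pairs the data
    preparation used at train time with the trained classifier, so the
    same transformations are replayed on raw production records."""

    def __init__(self, munge, train_fn):
        self.munge = munge        # raw record -> model-ready record
        self.train_fn = train_fn  # prepared records -> predict function
        self.predict_fn = None

    def train(self, raw_records):
        prepared = [self.munge(r) for r in raw_records]
        self.predict_fn = self.train_fn(prepared)
        return self

    def predict(self, raw_record):
        # Production scoring: identical munging, then the stored classifier.
        return self.predict_fn(self.munge(raw_record))

# Toy pipeline: impute a missing 'x' with 0, then a mean-threshold "model".
munge = lambda r: {"x": r.get("x", 0.0)}

def train_threshold(rows):
    mean = sum(r["x"] for r in rows) / len(rows)
    return lambda r: 1 if r["x"] > mean else 0

container = ModelContainer(munge, train_threshold).train(
    [{"x": 1.0}, {"x": 3.0}, {}])
```

Because the imputation lives inside the container, a raw record with a missing field is scored with exactly the preparation the model was trained under — the point the details section makes about avoiding a hand-translated production reimplementation.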
16901a5d2f96567b7a758ac7bda113ddcaf2c72b | MouseGenomics/R | src/library/base/man/qraux.Rd | % File src/library/base/man/qraux.Rd
% Part of the R package, https://www.R-project.org
% Copyright 1995-2015 R Core Team
% Copyright 2002-2015 The R Foundation
% Distributed under GPL 2 or later
\name{QR.Auxiliaries}
\title{Reconstruct the Q, R, or X Matrices from a QR Object}
\usage{
qr.X(qr, complete = FALSE, ncol =)
qr.Q(qr, complete = FALSE, Dvec =)
qr.R(qr, complete = FALSE)
}
\alias{qr.X}
\alias{qr.Q}
\alias{qr.R}
\arguments{
\item{qr}{object representing a QR decomposition. This will
typically have come from a previous call to \code{\link{qr}} or
\code{\link{lsfit}}.}
\item{complete}{logical expression of length 1. Indicates whether an
arbitrary orthogonal completion of the \eqn{\bold{Q}} or
\eqn{\bold{X}} matrices is to be made, or whether the \eqn{\bold{R}}
matrix is to be completed by binding zero-value rows beneath the
square upper triangle.}
\item{ncol}{integer in the range \code{1:nrow(qr$qr)}. The number
of columns to be in the reconstructed \eqn{\bold{X}}. The default
when \code{complete} is \code{FALSE} is the first
\code{min(ncol(X), nrow(X))} columns of the original \eqn{\bold{X}}
from which the qr object was constructed. The default when
\code{complete} is \code{TRUE} is a square matrix with the original
\eqn{\bold{X}} in the first \code{ncol(X)} columns and an arbitrary
orthogonal completion (unitary completion in the complex case) in
the remaining columns.}
\item{Dvec}{vector (not matrix) of diagonal values. Each column of
the returned \eqn{\bold{Q}} will be multiplied by the corresponding
diagonal value. Defaults to all \code{1}s.}
}
\description{
Returns the original matrix from which the object was constructed or
the components of the decomposition.
}
\value{
\code{qr.X} returns \eqn{\bold{X}}, the original matrix from
which the qr object was constructed, provided \code{ncol(X) <= nrow(X)}.
If \code{complete} is \code{TRUE} or the argument \code{ncol} is greater than
\code{ncol(X)}, additional columns from an arbitrary orthogonal
(unitary) completion of \code{X} are returned.
\code{qr.Q} returns part or all of \bold{Q}, the order-nrow(X)
orthogonal (unitary) transformation represented by \code{qr}. If
\code{complete} is \code{TRUE}, \bold{Q} has \code{nrow(X)} columns.
If \code{complete} is \code{FALSE}, \bold{Q} has \code{ncol(X)}
columns. When \code{Dvec} is specified, each column of \bold{Q} is
multiplied by the corresponding value in \code{Dvec}.
Note that \code{qr.Q(qr, *)} is a special case of
\code{\link{qr.qy}(qr, y)} (with a \dQuote{diagonal} \code{y}), and
\code{qr.X(qr, *)} is basically \code{\link{qr.qy}(qr, R)} (apart from
pivoting and \code{dimnames} setting).
\code{qr.R} returns \bold{R}. This may be pivoted, e.g., if
\code{a <- qr(x)} then \code{x[, a$pivot]} = \bold{QR}. The number of
rows of \bold{R} is either \code{nrow(X)} or \code{ncol(X)} (and may
depend on whether \code{complete} is \code{TRUE} or \code{FALSE}).
}
\seealso{
\code{\link{qr}},
\code{\link{qr.qy}}.
}
\examples{
p <- ncol(x <- LifeCycleSavings[, -1]) # not the 'sr'
qrstr <- qr(x) # dim(x) == c(n,p)
qrstr $ rank # = 4 = p
Q <- qr.Q(qrstr) # dim(Q) == dim(x)
R <- qr.R(qrstr) # dim(R) == ncol(x)
X <- qr.X(qrstr) # X == x
range(X - as.matrix(x)) # ~ < 6e-12
## X == Q \%*\% R if there has been no pivoting, as here:
all.equal(unname(X),
unname(Q \%*\% R))
# example of pivoting
x <- cbind(int = 1,
b1 = rep(1:0, each = 3), b2 = rep(0:1, each = 3),
c1 = rep(c(1,0,0), 2), c2 = rep(c(0,1,0), 2), c3 = rep(c(0,0,1),2))
x # is singular, columns "b2" and "c3" are "extra"
a <- qr(x)
zapsmall(qr.R(a)) # columns are int b1 c1 c2 b2 c3
a$pivot
pivI <- sort.list(a$pivot) # the inverse permutation
all.equal (x, qr.Q(a) \%*\% qr.R(a)) # no, no
stopifnot(
all.equal(x[, a$pivot], qr.Q(a) \%*\% qr.R(a)), # TRUE
all.equal(x , qr.Q(a) \%*\% qr.R(a)[, pivI])) # TRUE too!
}
\keyword{algebra}
\keyword{array}
| 4,060 | gpl-2.0 |
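The reconstruction identities the example above verifies in R (X == qr.X(qrstr), and X == Q %*% R when there is no pivoting) can also be demonstrated outside R. The following pure-Python sketch uses classical Gram-Schmidt without pivoting — an illustration of the Q·R = X identity only, not the Householder method that R's qr() (via LINPACK/LAPACK) actually uses:

```python
import math

def qr_gram_schmidt(X):
    """QR of a matrix X (list of rows, assumed full column rank) by
    classical Gram-Schmidt. Q is returned column-wise; R is upper
    triangular with the column norms on its diagonal."""
    n, p = len(X), len(X[0])
    cols = [[X[i][j] for i in range(n)] for j in range(p)]
    Q, R = [], [[0.0] * p for _ in range(p)]
    for j in range(p):
        v = cols[j][:]
        for i in range(len(Q)):
            # Projection of column j onto the i-th orthonormal direction.
            R[i][j] = sum(Q[i][k] * cols[j][k] for k in range(n))
            v = [v[k] - R[i][j] * Q[i][k] for k in range(n)]
        R[j][j] = math.sqrt(sum(x * x for x in v))
        Q.append([x / R[j][j] for x in v])
    return Q, R

def reconstruct(Q, R, n, p):
    # X[i][j] = sum_m Q[:, m][i] * R[m][j]  (Q is stored column-wise).
    return [[sum(Q[m][i] * R[m][j] for m in range(p)) for j in range(p)]
            for i in range(n)]

X = [[1.0, 2.0], [0.0, 1.0], [1.0, 0.0]]
Q, R = qr_gram_schmidt(X)
X2 = reconstruct(Q, R, 3, 2)   # recovers X up to rounding
```

With column pivoting — as in the singular-matrix R example, where qr() reorders the columns — the identity becomes x[, pivot] == Q %*% R, which is exactly what the stopifnot() call above checks.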