id
stringlengths
40
40
repo_name
stringlengths
5
110
path
stringlengths
2
233
content
stringlengths
0
1.03M
size
int32
0
60M
license
stringclasses
15 values
89bfcf3044826128da463bebc190b036606f8e19
jennybc/purrr
man/list_modify.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/list-modify.R \name{list_modify} \alias{list_modify} \alias{list_merge} \alias{update_list} \title{Modify a list} \usage{ list_modify(.x, ...) list_merge(.x, ...) } \arguments{ \item{.x}{List to modify.} \item{...}{New values of a list. Use \code{NULL} to remove values. Use a formula to evaluate in the context of the list values. These dots have \link[rlang:dots_list]{splicing semantics}.} } \description{ \code{list_modify()} and \code{list_merge()} recursively combine two lists, matching elements either by name or position. If a sub-element is present in both lists, \code{list_modify()} takes the value from \code{y}, and \code{list_merge()} concatenates the values together. \code{update_list()} handles formulas and quosures that can refer to values existing within the input list. Note that this function might be deprecated in the future in favour of a \code{dplyr::mutate()} method for lists. } \examples{ x <- list(x = 1:10, y = 4, z = list(a = 1, b = 2)) str(x) # Update values str(list_modify(x, a = 1)) # Replace values str(list_modify(x, z = 5)) str(list_modify(x, z = list(a = 1:5))) # Remove values str(list_modify(x, z = NULL)) # Combine values str(list_merge(x, x = 11, z = list(a = 2:5, c = 3))) # All these functions take dots with splicing. Use !!! or UQS() to # splice a list of arguments: l <- list(new = 1, y = NULL, z = 5) str(list_modify(x, !!! l)) # In update_list() you can also use quosures and formulas to # compute new values. This function is likely to be deprecated in # the future str(update_list(x, z1 = ~z[[1]])) str(update_list(x, z = rlang::quo(x + y))) }
1,684
gpl-3.0
45a4fa997ab725351641537096881b3e3c7ed69f
kalibera/rexp
src/library/base/man/file.info.Rd
% File src/library/base/man/file.info.Rd % Part of the R package, http://www.R-project.org % Copyright 1995-2014 R Core Team % Distributed under GPL 2 or later \name{file.info} \alias{file.info} \alias{file.mode} \alias{file.mtime} \alias{file.size} \title{Extract File Information} \description{ Utility function to extract information about files on the user's file systems. } \usage{ file.info(\dots, extra_cols = TRUE) file.mode(\dots) file.mtime(\dots) file.size(\dots) } \arguments{ \item{\dots}{character vectors containing file paths. Tilde-expansion is done: see \code{\link{path.expand}}.} \item{extra_cols}{Logical: return all cols rather than just the first six.} } \details{ What constitutes a \sQuote{file} is OS-dependent but includes directories. (However, directory names must not include a trailing backslash or slash on Windows.) See also the section in the help for \code{\link{file.exists}} on case-insensitive file systems. The file \sQuote{mode} follows POSIX conventions, giving three octal digits summarizing the permissions for the file owner, the owner's group and for anyone respectively. Each digit is the logical \emph{or} of read (4), write (2) and execute/search (1) permissions. #ifdef unix On most systems symbolic links are followed, so information is given about the file to which the link points rather than about the link. #endif #ifdef windows File modes are probably only useful on NTFS file systems, and it seems all three digits refer to the file's owner. The execute/search bits are set for directories, and for files based on their extensions (e.g., \file{.exe}, \file{.com}, \file{.cmd} and \file{.bat} files). \code{\link{file.access}} will give a more reliable view of read/write access availability to the \R process. UTF-8-encoded file names not valid in the current locale can be used. Junction points and symbolic links are followed, so information is given about the file/directory to which the link points rather than about the link. 
#endif } \value{ For \code{file.info}, data frame with row names the file names and columns \item{size}{double: File size in bytes.} \item{isdir}{logical: Is the file a directory?} \item{mode}{integer of class \code{"octmode"}. The file permissions, printed in octal, for example \code{644}.} \item{mtime, ctime, atime}{integer of class \code{"POSIXct"}: file modification, \sQuote{last status change} and last access times.} #ifdef unix \item{uid}{integer: the user ID of the file's owner.} \item{gid}{integer: the group ID of the file's group.} \item{uname}{character: \code{uid} interpreted as a user name.} \item{grname}{character: \code{gid} interpreted as a group name.} Unknown user and group names will be \code{NA}. #endif #ifdef windows \item{exe}{character: what sort of executable is this? Possible values are \code{"no"}, \code{"msdos"}, \code{"win16"}, \code{"win32"}, \code{"win64"} and \code{"unknown"}. Note that a file (e.g. a script file) can be executable according to the mode bits but not executable in this sense.} #endif If \code{extra_cols} is false, only the first six columns are returned: as these can all be found from a single C system call this can be faster. Entries for non-existent or non-readable files will be \code{NA}. #ifdef unix The \code{uid}, \code{gid}, \code{uname} and \code{grname} columns may not be supplied on a non-POSIX Unix-alike system, and will not be on Windows. #endif What is meant by the three file times depends on the OS and file system. On Windows native file systems \code{ctime} is the file creation time (something which is not recorded on most Unix-alike file systems). What is meant by \sQuote{file access} and hence the \sQuote{last access time} is system-dependent. The times are reported to an accuracy of seconds, and perhaps more on some systems. However, many file systems only record times in seconds, and some (e.g. modification time on FAT systems) are recorded in increments of 2 or more seconds. 
\code{file.mode}, \code{file.mtime} and \code{file.size} are convenience wrappers returning just one of the columns. } #ifdef unix \note{ Some systems allow files of more than 2Gb to be created but not accessed by the \code{stat} system call. Such files will show up as non-readable (and very likely not be readable by any of \R's input functions) -- fortunately such file systems are becoming rare. } #endif \seealso{ \code{\link{Sys.readlink}} to find out about symbolic links, \code{\link{files}}, \code{\link{file.access}}, \code{\link{list.files}}, and \code{\link{DateTimeClasses}} for the date formats. \code{\link{Sys.chmod}} to change permissions. } \examples{ ncol(finf <- file.info(dir())) # at least six \donttest{finf # the whole list} ## Those that are more than 100 days old : finf <- file.info(dir(), extra_cols = FALSE) finf[difftime(Sys.time(), finf[,"mtime"], units = "days") > 100 , 1:4] file.info("no-such-file-exists") } \keyword{file}
5,105
gpl-2.0
55c03f58102e022f55479f21bdc8fd88fe0eb358
nickmckay/LiPD-utilities
R/deprecated/man/merge_csv_columns.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/jsons_merge.R \name{merge_csv_columns} \alias{merge_csv_columns} \title{Merge values into each column} \usage{ merge_csv_columns(csvs, meta) } \arguments{ \item{meta}{Table metadata, sorted by column} } \value{ list meta: Table metadata } \description{ Merge values into each column } \keyword{internal}
388
gpl-2.0
07df5702b2a3d966f9766831f6c351885bee4d21
surh/AMOR
man/write.qiime.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/write.qiime.r \name{write.qiime} \alias{write.qiime} \alias{write.qiime.default} \alias{write.qiime.Dataset} \title{Write a QIIME abundance table file} \usage{ write.qiime(x, file) \method{write.qiime}{default}(x, file) \method{write.qiime}{Dataset}(x, file) } \arguments{ \item{x}{Either an abundance matrix or a Dataset} \item{file}{Path to the file to write} } \description{ Writes a file compatible with QIIME } \examples{ data(Rhizo) # The following are equivalent write.qiime(Rhizo,'myfile.txt') write.qiime(create_dataset(Rhizo),'myfile.txt') } \author{ Sur Herrera Paredes }
665
gpl-3.0
5229f33d2785608658fec5bbaec8837588d671df
swarm-lab/Rvision
man/inPlaceComparison.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/comparisons.R, R/zzz.R \name{inPlaceComparison} \alias{inPlaceComparison} \alias{\%i>\%} \alias{\%i<\%} \alias{\%i>=\%} \alias{\%i<=\%} \alias{\%i==\%} \alias{\%i!=\%} \title{In Place Comparison Operators for Images} \usage{ e1 \%i>\% e2 e1 \%i<\% e2 e1 \%i>=\% e2 e1 \%i<=\% e2 e1 \%i==\% e2 e1 \%i!=\% e2 } \arguments{ \item{e1, e2}{Either 2 \code{\link{Image}} objects or 1 \code{\link{Image}} object and 1 numeric value/vector. If a vector and its length is less than the number of channels of the image, then it is recycled to match it.} } \value{ These operators do not return anything. They modify the image in place (destructive operation). If 2 images are passed to the operators, only the one on the left side of the operator is modified; the other is left untouched. } \description{ In Place Comparison Operators for Images } \examples{ balloon1 <- image(system.file("sample_img/balloon1.png", package = "Rvision")) balloon2 <- image(system.file("sample_img/balloon2.png", package = "Rvision")) balloon1 \%i>\% balloon2 } \seealso{ \code{\link{Image}} } \author{ Simon Garnier, \email{[email protected]} }
1,203
gpl-3.0
523a3e1a74edce0f73b9f3025cf660c7644d0a0a
olli0601/rBEAST
man/treeannotator.plot.immu.timeline.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/fun.treeannotator.R \name{treeannotator.plot.immu.timeline} \alias{treeannotator.plot.immu.timeline} \title{No description yet} \usage{ treeannotator.plot.immu.timeline(ph, ph.immu.timeline, immu.min = 150, immu.max = 800, immu.legend = c(200, 350, 500, immu.max), width.yinch = 0.15, add.yinch = -0.005, col.bg = cols[3], col.legend = cols[4], cex.txt = 0.2, lines.lwd = 0.2) } \description{ No description yet } \author{ Oliver Ratmann }
525
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
WelkinGuan/r-source
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
rho-devel/rho
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
424e55aef96d8d55e2a347703bb094fb6f99bcdb
wStockhausen/rTCSAM2015
man/plotTCSAM2015I.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/plotTCSAM2015I.R \name{plotTCSAM2015I} \alias{plotTCSAM2015I} \title{Plot TCSAM2015 model output} \usage{ plotTCSAM2015I(repObj = NULL, prsObj = NULL, stdObj = NULL, objList = NULL, ggtheme = theme_grey(), showPlot = TRUE, pdf = NULL, width = 14, height = 8, verbose = FALSE) } \arguments{ \item{repObj}{- tcsam2015.rep object based on sourcing a TCSAM2015 model report file. can be NULL.} \item{prsObj}{- tcsam2015.prs object based on reading a TCSAM2015 active parameters csv file. can be NULL.} \item{stdObj}{- tcsam2015.std object based on reading a TCSAM2015 std file. can be NULL.} \item{objList}{- list with optional elements repObj, prsObj, stdObj (an optional way to provide the Obj's)} \item{ggtheme}{- a ggplot2 theme to use with ggplot2 plots} \item{showPlot}{- flag to show plots immediately} \item{pdf}{- filename for pdf output (optional)} \item{width}{- pdf page width (in inches)} \item{height}{- pdf page height (in inches)} \item{verbose}{- flag (T/F) to print diagnostic info} } \value{ multi-level list of ggplot2 objects } \description{ Function to plot data and results from a TCSAM2015 model run. } \details{ none }
1,231
mit
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
cxxr-devel/cxxr
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
jeroenooms/r-source
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
SurajGupta/r-source
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
d67c7891a58753ffc245ab11ce18b46849069e5f
janezhango/BigDataMachineLearning
R/h2o-package/man/h2o.exportFile.Rd
\name{h2o.exportFile} \alias{h2o.exportFile} \title{ Export H2O Data Frame to a File. } \description{ Export an H2O Data Frame (which can be either VA or FV) to a file. This file may be written to the H2O instance's local filesystem, to HDFS (preface the path with hdfs://), or to S3N (preface the path with s3n://). } \usage{ ## Default method: h2o.exportFile(data, path, force = FALSE) } \arguments{ \item{data}{ An \code{\linkS4class{H2OParsedData}} or \code{\linkS4class{H2OParsedDataVA}} data frame. } \item{path}{ The path to write the file to. Must include the directory and filename. May be prefaced with hdfs:// or s3n://. Each row of data appears as one line of the file. } \item{force}{ (Optional) If \code{force = TRUE} any existing file will be overwritten. Otherwise if the file already exists the operation will fail. } } \value{ None. (The function will stop if it fails.) } \examples{ \dontrun{ library(h2o) localH2O = h2o.init(ip = "localhost", port = 54321, startH2O = TRUE) irisPath = system.file("extdata", "iris.csv", package = "h2o") iris.hex = h2o.importFile(localH2O, path = irisPath) h2o.exportFile(iris.hex, path = "/path/on/h2o/server/filesystem/iris.csv") h2o.exportFile(iris.hex, path = "hdfs://path/in/hdfs/iris.csv") h2o.exportFile(iris.hex, path = "s3n://path/in/s3/iris.csv") } }
1,329
apache-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
Mouseomics/R
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
andy-thomason/r-source
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
kmillar/cxxr
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
RevolutionAnalytics/RRO
R-src/src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
LeifAndersen/R
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
krlmlr/r-source
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
allr/timeR
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
2c62e026fa4ceb7ea96890d725352016b5b36825
petercraigmile/dwt
dwt/man/plot.dwt.Rd
\name{plot.dwt} \alias{plot.dwt} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Display the DWT decomposition graphically } \description{ %% ~~ A concise (1-5 lines) description of what the function does. ~~ } \usage{ plot.dwt(x, xlab = "time", col = "black", bg = "white", bdcol = "black", ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ %% ~~Describe \code{x} here~~ } \item{xlab}{ %% ~~Describe \code{xlab} here~~ } \item{col}{ %% ~~Describe \code{col} here~~ } \item{bg}{ %% ~~Describe \code{bg} here~~ } \item{bdcol}{ %% ~~Describe \code{bdcol} here~~ } \item{\dots}{ %% ~~Describe \code{\dots} here~~ } } \details{ %% ~~ If necessary, more details than the description above ~~ } \value{ %% ~Describe the value returned %% If it is a LIST, use %% \item{comp1 }{Description of 'comp1'} %% \item{comp2 }{Description of 'comp2'} %% ... } \references{ %% ~put references to the literature/web site here ~ } \author{ %% ~~who you are~~ } \note{ %% ~~further notes~~ } %% ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ %% ~~objects to See Also as \code{\link{help}}, ~~~ } \examples{ } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. %\keyword{ ~kwd1 } %\keyword{ ~kwd2 }% __ONLY ONE__ keyword per line
1,385
gpl-3.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
minux/R
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
jimhester/r-source
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
7f73cdebe2722c61303378df02bee0e65299af24
fcampelo/Lattes-XML-to-HTML
man/print_phd_theses.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/print_phd_theses.R \name{print_phd_theses} \alias{print_phd_theses} \title{Print PhD theses} \usage{ print_phd_theses(x, language = c("EN", "PT")) } \arguments{ \item{x}{data frame containing information on published phd theses} \item{language}{Language to use in section headers} } \description{ Prints PhD theses defended }
405
gpl-3.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
ArunChauhan/cxxr
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
mathematicalcoffee/r-source
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
krlmlr/cxxr
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
MouseGenomics/R
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
418ccaab0a332a1666c0d46d57c23e07725c76fd
mcdelaney/fbRads
man/fb_insights.Rd
% Generated by roxygen2 (4.1.1): do not edit by hand % Please edit documentation in R/fb_insights.R \name{fb_insights} \alias{fb_insights} \title{Insights} \usage{ fb_insights(fbacc, target = fbacc$acct_path, job_type = c("sync", "async"), ...) } \arguments{ \item{fbacc}{(optional) \code{FB_Ad_account} object, which defaults to the last returned object of \code{\link{fbad_init}}.} \item{target}{ad account id (default), campaign id, adset id or ad id} \item{job_type}{synchronous or asynchronous request. If the former fails with "please reduce the amount of data", it will fall back to an async request.} \item{...}{named arguments passed to the API, like time range, fields, filtering etc.} } \value{ list } \description{ Insights } \examples{ \dontrun{ fb_insights(fbacc) ## process results l <- fb_insights(fbacc, date_preset = 'today', level = 'adgroup') library(rlist) list.stack(list.select(l, date_start, date_stop, adgroup_id, total_actions, total_unique_actions, total_action_value, impressions, unique_impressions, social_impressions, unique_social_impressions, clicks, unique_clicks, social_clicks, unique_social_clicks, spend, frequency, deeplink_clicks, app_store_clicks, website_clicks, reach, social_reach, ctr, unique_ctr, cpc, cpm, cpp, cost_per_total_action, cost_per_unique_click, relevance_score = relevance_score$score)) } } \references{ \url{https://developers.facebook.com/docs/marketing-api/insights/v2.3} }
1,439
agpl-3.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
SensePlatform/R
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
kmillar/rho
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
aviralg/R-dyntrace
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
wch/r-source
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
abiyug/r-source
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
b3b5b43d22db3088317cc51a1c2dc2a1db2c74a5
rsachse/luess
man/matchInputGrid.Rd
\name{matchInputGrid} \alias{matchInputGrid} \title{Identify positions of output grid cells in the input grid} \usage{ matchInputGrid(grid.in, grid.out) } \arguments{ \item{grid.in}{array with 2 dimensions or object of class SpatialGrid providing longitude and latitude of cell centers of the input grid} \item{grid.out}{array with 2 dimensions or object of class SpatialGrid providing longitude and latitude of cell centers of the output grid} } \value{ vector of integers giving the positions of the output grid cells in the input grid } \description{ The function identifies positions of cells of the output grid within the input grid. This is helpful in cases where the output grid has a smaller number of cells than the input grid. } \examples{ \dontrun{ mygrid <- generate_grid() pos_in_input <- matchInputGrid(coordinates(mygrid), lpj_short_outgrid) plot(coordinates(mygrid)[pos_in_input,], col="green", pch=".") } } \author{ Rene Sachse \email{[email protected]} } \keyword{CLUMondo} \keyword{LPJ,} \keyword{LPJml,} \keyword{cells,} \keyword{grid} \keyword{grid,} \keyword{positions,}
1,124
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
allr/r-instrumented
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
nathan-russell/r-source
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
15ed80f235ac3d5359dd30afe7fb7ca197e759d0
cran/readGenalex
man/as.loci.genalex.Rd
% Generated by roxygen2 (4.1.1): do not edit by hand % Please edit documentation in R/readGenalex-pegas.R \name{as.loci.genalex} \alias{as.loci.genalex} \title{Convert class \code{'genalex'} object to data frame of class \code{'loci'} from package \code{'pegas'}} \usage{ as.loci.genalex(x, phased = FALSE, check.annotation = TRUE, ...) } \arguments{ \item{x}{Annotated data frame of class \code{'genalex'}} \item{phased}{Still some details to work out. Default \code{FALSE}. If \code{FALSE}, assumes alleles in \code{x} are unphased so that a genotype of \code{101/107} is identical to a genotype of \code{107/101}. This results in the use of \code{"/"} as the allele separator. If \code{TRUE}, uses \code{"|"} as the allele separator, and assumes alleles are phased so that a genotype of \code{101|107} is different from a genotype of \code{107/101}.} \item{check.annotation}{If \code{TRUE}, the annotations for the dataset are checked using \code{is.genalex(x, force = TRUE, skip.strings = TRUE)} prior to conversion. If that returns \code{FALSE}, nothing is converted and an error is generated.} \item{\dots}{Additional arguments, currently ignored} } \value{ \code{x} as an object of class \code{'loci'}, which is a data frame with the genotype of each locus encoded as factors. Additional changes apply, see Details. } \description{ Converts an object of class \code{'genalex'} to a data frame of class \code{'loci'} from the \href{http://cran.r-project.org/web/packages/pegas/index.html}{pegas} package. This data frame is similar to one of class \code{'genalex'}, in that it mixes genetic and other data in the same data frame, but its conversion of multiple allele columns to single genotype columns is similar to the result of the \code{\link{as.genetics}} function of this package. } \details{ Like class \code{'genalex'}, class \code{'loci'} can encode genotypes of any ploidy. 
Once a class \code{'genalex'} object is converted to class \code{'loci'}, it may be further converted to other data structures for analysis with \href{http://cran.r-project.org/web/packages/pegas/index.html}{pegas} and \href{http://cran.r-project.org/web/packages/adegenet/index.html}{adegenet}. The specific changes that occur to an object of class \code{'genalex'} for it to become an object of class \code{'loci'}: \itemize{ \item Row names are set from sample names \item The first column of sample names is removed \item The population column is renamed \code{"population"}, with its original name retained in the \code{"pop.title"} attribute. It is also encoded as a factor. \item The individual alleles of a locus are merged into a single column, with alleles separated by \code{"/"} (\code{phased = FALSE}) or \code{"|"} (\code{phased = TRUE}). These columns are encoded as factors. \item The \code{"locus.columns"} attribute is updated to reflect that all alleles at a locus are now joined into a single column \item A new attribute \code{"locicol"} required by class \code{'loci'} is added, with a value identical to the \code{"locus.columns"} attribute \item The \code{class} is changed from \code{c('genalex', 'data.frame')} to \code{c('loci', 'data.frame')} } Because of the removal of the sample name column and the additional allele columns, the number of columns will be reduced by 1 plus the number of loci. For further details of the structure of class \code{\link[pegas]{loci}}, see \url{http://ape-package.ird.fr/pegas/DefinitionDataClassesPegas.pdf}. Because class \code{'loci'} can readily encode additional columns, the extra columns of a class \code{'genalex'} object can be bound with \code{cbind} as additional columns. This is a specialised wrapper around the function \code{\link[pegas]{as.loci.data.frame}} from the \href{http://cran.r-project.org/web/packages/pegas/index.html}{pegas} package. 
} \examples{ suppressPackageStartupMessages(require(pegas)) data(Qagr_pericarp_genotypes) dd <- as.genalex(head(Qagr_pericarp_genotypes, 15), force = TRUE) as.loci(dd) str(as.loci(dd, phased = TRUE)) } \author{ Douglas G. Scofield } \seealso{ \code{\link[pegas]{as.loci}}, \code{\link{joinGenotypes}} }
4,235
lgpl-3.0
e8c47411697f2a6559c3d0eab90fa5e89bae96d3
reactorlabs/gnur
src/library/stats/man/ts-methods.Rd
% File src/library/stats/man/ts-methods.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{ts-methods} \alias{diff.ts} \alias{na.omit.ts} \title{Methods for Time Series Objects} \description{ Methods for objects of class \code{"ts"}, typically the result of \code{\link{ts}}. } \usage{ \method{diff}{ts}(x, lag = 1, differences = 1, \dots) \method{na.omit}{ts}(object, \dots) } \arguments{ \item{x}{an object of class \code{"ts"} containing the values to be differenced.} \item{lag}{an integer indicating which lag to use.} \item{differences}{an integer indicating the order of the difference.} \item{object}{a univariate or multivariate time series.} \item{\dots}{further arguments to be passed to or from methods.} } \details{ The \code{na.omit} method omits initial and final segments with missing values in one or more of the series. \sQuote{Internal} missing values will lead to failure. } \value{ For the \code{na.omit} method, a time series without missing values. The class of \code{object} will be preserved. } \seealso{ \code{\link{diff}}; \code{\link{na.omit}}, \code{\link{na.fail}}, \code{\link{na.contiguous}}. } \keyword{ts}
1,255
gpl-2.0
313ca166e122ef1c41b51a89f9d1b8a555c55131
ARCCSS-extremes/climpact2
pcic_packages/climdex.pcic.ncdf/man/get.climdex.variable.metadata.Rd
% Generated by roxygen2 (4.0.2): do not edit by hand \name{get.climdex.variable.metadata} \alias{get.climdex.variable.metadata} \title{Returns metadata for specified Climdex variables} \usage{ get.climdex.variable.metadata(vars.list, template.filename) } \arguments{ \item{vars.list}{The list of variables, as returned by \code{\link{get.climdex.variable.list}}.} \item{template.filename}{The filename template to be used when generating filenames.} } \value{ A data frame containing the following: \itemize{ \item{long.name}{Long names for the variable} \item{var.name}{Variable name for use in the file} \item{units}{Units for the variable} \item{annual}{Whether the variable is annual} \item{base.period.attr}{Whether to include a base period attribute} \item{standard.name}{Standard name to use for the variable} \item{unknown}{Filename to be written out} } } \description{ Returns metadata for specified Climdex variables. } \details{ This function returns metadata suitable for use in NetCDF files for the specified variables. } \examples{ ## Get metadata (including filenames) for specified variables. fn <- "pr_day_BCCAQ+ANUSPLIN300+MRI-CGCM3_historical+rcp85_r1i1p1_19500101-21001231.nc" var.list2 <- get.climdex.variable.list("prec", time.resolution="annual") md <- get.climdex.variable.metadata(var.list2, fn) }
1,326
gpl-3.0
560945de8b7f9135f06b5142f5720a58cdbbd897
karchjd/gppmr
man/coef.GPPM.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/extractors.R \name{coef.GPPM} \alias{coef.GPPM} \title{Point Estimates} \usage{ \method{coef}{GPPM}(object, ...) } \arguments{ \item{object}{object of class GPPM. Must be fitted, that is, a result from \code{\link{fit.GPPM}}.} \item{...}{additional arguments (currently not used).} } \value{ Point estimates for all parameters as a named numeric vector. } \description{ Extracts point estimates for all parameters from a fitted GPPM. } \examples{ \donttest{ data("demoLGCM") lgcm <- gppm('muI+muS*t','varI+covIS*(t+t#)+varS*t*t#+(t==t#)*sigma', demoLGCM,'ID','y') lgcmFit <- fit(lgcm) paraEsts <- coef(lgcmFit) } } \seealso{ Other functions to extract from a GPPM: \code{\link{SE}}, \code{\link{confint.GPPM}}, \code{\link{covf}}, \code{\link{datas}}, \code{\link{fitted.GPPM}}, \code{\link{getIntern}}, \code{\link{logLik.GPPM}}, \code{\link{maxnObs}}, \code{\link{meanf}}, \code{\link{nObs}}, \code{\link{nPars}}, \code{\link{nPers}}, \code{\link{nPreds}}, \code{\link{parEsts}}, \code{\link{pars}}, \code{\link{preds}}, \code{\link{vcov.GPPM}} }
1,153
gpl-3.0
92bd6705d0abc8590a3abdd067de32fa4d418d5d
DanielKneipp/DNAr
man/get_dsd_buff_str.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/dsd.R \name{get_dsd_buff_str} \alias{get_dsd_buff_str} \title{Instantiate a buffer module in the DSD script} \usage{ get_dsd_buff_str(qs, qmax, Cmax, Cii, d, domains) } \arguments{ \item{qs}{String representing a variable name for the \code{qs} parameter of the module signature.} \item{qmax}{String representing a variable name for the \code{qmax} parameter of the module signature.} \item{Cmax}{String representing a variable name for the \code{Cmax} parameter of the module signature.} \item{Cii}{String representing a variable name for the \code{Cii} parameter of the module signature.} \item{d}{String representing a variable name for the \code{d} parameter of the module signature.} \item{domains}{Vector of strings representing the domains of the input species.} } \value{ A string representing the instantiation of a \code{Buff()} module. } \description{ This function returns a string representing an addition of a buffer reaction module in the DSD script. It creates a \code{Buff()} module in the script, replacing all the parameter strings by the ones specified in this function. }
1,176
agpl-3.0
6b1d0fc8e030a82ca033459534f1d9ea399d8f07
predictive-technology-laboratory/sensus
SensusR/RProject/man/plot.BatteryDatum.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/SensusR.R \name{plot.BatteryDatum} \alias{plot.BatteryDatum} \title{Plot battery data.} \usage{ \method{plot}{BatteryDatum}(x, pch = ".", type = "l", main = "Battery", ...) } \arguments{ \item{x}{Battery data.} \item{pch}{Plotting character.} \item{type}{Line type.} \item{main}{Main title.} \item{...}{Other plotting parameters.} } \value{ None } \description{ Plot battery data. }
467
apache-2.0
2302bd708914df821c0fa9497393d65fe6ad73e3
letiR/letiRmisc
man/meanAlong.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/meanAlong.R \name{meanAlong} \alias{meanAlong} \title{Compute the mean along a vector} \usage{ meanAlong(vec, n) } \arguments{ \item{vec}{a numeric vector.} \item{n}{an integer indicating the size of the window.} } \description{ This function computes a simple moving-window mean. } \examples{ meanAlong(1:10, 2) }
395
gpl-3.0
066283e8b31946018ef78e0a7109f852327b0171
BiGCAT-UM/RRegrs
RRegrs/man/RFRFEreg.Rd
\name{RFRFEreg} \alias{RFRFEreg} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Fitting Recursive Feature Elimination - Random Forest Models. } \description{ RFRFEreg fits RFE RF regression models and returns resampling-based performance measures using the train function from the caret package. } \usage{ RFRFEreg(my.datf.train,my.datf.test,sCV,iSplit=1, fDet=F,outFile="") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{my.datf.train}{the training data set; an object where samples are in rows and features are in columns. The first column should be a numeric or factor vector containing the outcome for each sample.} \item{my.datf.test}{the test data set.} \item{sCV}{A string or a character vector specifying which resampling method to use. See details below.} \item{iSplit}{a number indicating from which splitting of the data the above train and test sets are derived. The default value is 1.} \item{fDet}{A logical value for saving model statistics; the default value is FALSE. See below for details.} \item{outFile}{A string specifying the output file (could include path) for details (fDet=TRUE).} } \details{ RMSE is the summary metric used to select the optimal model. RFE Random Forest uses the rfe function of caret combined with the rfFuncs helper functions and randomForest modeling to obtain the best model with the best feature set. To control the computational nuances of the train function, trainControl is used; the number of folds or resampling iterations is set to 10, and the number of complete sets of folds is set to 10 (for repeated k-fold cross-validation). sCV can take the following values: boot, boot632, cv, repeatedcv, LOOCV, LGOCV (for repeated training/test splits), none (only fits one model to the entire training set), oob (only for random forest, bagged trees, bagged earth, bagged flexible discriminant analysis, or conditional tree forest models), "adaptive_cv", "adaptive_boot" or "adaptive_LGOCV". 
If fDet=TRUE, the following output is produced: a CSV file with detailed statistics about the regression model (Regression method, splitting number, cross-validation type, Training set summary, Test set summary, Fitting summary, List of predictors, Training predictors, Test predictors, resampling statistics, feature importance, residuals of the fitted model, assessment of applicability domain (leverage analysis, Cook's distances, influential points)), 5-12 plots for fitting statistics as a PDF file for each splitting and cross-validation method (Training Yobs-Ypred, Test Yobs-Ypred, Feature Importance, Fitted vs. Residuals for Fitted Model, Leverage for Fitted Model, Cook's Distance for Fitted Model, 6 standard fitting plots using plot function with cutoff.Cook). 
4,519
bsd-2-clause
9d4829d8a171238ff83b22a636bd614581aa1c27
jpshanno/Ecohydro
man/time_seq.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/Helper_Functions.R \name{time_seq} \alias{time_seq} \title{Create time sequence with custom intervals} \usage{ time_seq(start, end, step, units = "days") } \arguments{ \item{start}{a start date or time as a character string ("yyyy-mm-dd", "yyyy-mm-dd HH:MM", or "yyyy-mm-dd HH:MM:SS")} \item{end}{an end date or time as a character string ("yyyy-mm-dd", "yyyy-mm-dd HH:MM", or "yyyy-mm-dd HH:MM:SS")} \item{step}{a number representing the length of time in units between each step} \item{units}{the units of step: "seconds", "minutes", "days", "months", "years"} } \value{ A POSIXt vector } \description{ Creates a POSIXt vector from the start time to the end time in steps of the chosen interval length. } \examples{ time_seq("2016-11-01", "2016-11-03", step = 15, units = "minutes") }
883
gpl-2.0
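The documented behaviour of time_seq() above can be reproduced with base R alone. The sketch below is an illustrative equivalent, assuming time_seq() essentially wraps seq() over POSIXct values; the tz = "UTC" choice is an assumption for reproducibility, not part of the documented interface.

```r
# Base-R equivalent of the documented call
#   time_seq("2016-11-01", "2016-11-03", step = 15, units = "minutes")
# Assumption: time_seq() wraps seq() over POSIXct values.
start <- as.POSIXct("2016-11-01", tz = "UTC")
end   <- as.POSIXct("2016-11-03", tz = "UTC")
step  <- 15 * 60                      # 15 minutes, expressed in seconds
out   <- seq(from = start, to = end, by = step)
length(out)                           # two full days at 15-minute resolution
```

Since both endpoints fall on midnight, the sequence includes both ends: 2 days is 2880 minutes, i.e. 192 intervals of 15 minutes, giving 193 points.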
b3c9e883ed06880da9a356965c3285affc158699
cran/zipfR
man/productivity_measures.Rd
\name{productivity.measures} \alias{productivity.measures} \alias{productivity.measures.tfl} \alias{productivity.measures.spc} \alias{productivity.measures.vgc} \alias{productivity.measures.default} \title{Measures of Productivity and Lexical Richness (zipfR)} \encoding{UTF-8} \description{ Compute various measures of productivity and lexical richness from an observed frequency spectrum or type-frequency list, from an observed vocabulary growth curve, or from a vector of tokens. } \usage{ productivity.measures(obj, measures, ...) \method{productivity.measures}{tfl}(obj, measures, ...) \method{productivity.measures}{spc}(obj, measures, ...) \method{productivity.measures}{vgc}(obj, measures, ...) \method{productivity.measures}{default}(obj, measures, ...) } \arguments{ \item{obj}{a suitable data object from which productivity measures can be computed. Currently either a frequency spectrum (of class \code{spc}), a type-frequency list (of class \code{tfl}), a vocabulary growth curve (of class \code{vgc}), or a token vector.} \item{measures}{character vector naming the productivity measures to be computed (see "Productivity Measures" below). Names may be abbreviated as long as they remain unique. If unspecified, all supported measures are computed.} \item{...}{additional arguments passed on to the method implementations (currently, no further arguments are recognized)} } \value{ If \code{obj} is a frequency spectrum, type-frequency list or token vector: a numeric vector of the same length as \code{measures} with the corresponding observed values of the productivity measures. If \code{obj} is a vocabulary growth curve: a numeric matrix with columns corresponding to the selected productivity measures and rows corresponding to the sample sizes of the vocabulary growth curve. } \details{ This function computes productivity measures based on an observed frequency spectrum, type-frequency list or vocabulary growth curve. 
If an \emph{expected} spectrum or VGC is passed, the expectations \eqn{E[V]}, \eqn{E[V_m]} will simply be substituted for the sample values \eqn{V}, \eqn{V_m} in the equations. In most cases, this does \emph{not} yield the expected value of the productivity measure! Some measures can only be computed from a complete frequency spectrum. They will return \code{NA} if an incomplete spectrum or type-frequency list, an expected spectrum, or a vocabulary growth curve is passed as \code{obj}. Some other measures can only be computed if a sufficient number of spectrum elements is included in a vocabulary growth curve (usually at least \eqn{V_1} and \eqn{V_2}), and will return \code{NA} otherwise. Such limitations are indicated in the list of measures below (unless spectrum elements \eqn{V_1} and \eqn{V_2} are sufficient). For an expected frequency spectrum or vocabulary growth curve, accurate expectations can be computed for the measures \eqn{R}, \eqn{C}, \eqn{P}, TTR and \eqn{V}. For \eqn{S}, \eqn{H} and Hapaxes, the expectations are often reasonably good approximations (based on a normal approximation of the ratio \eqn{V_m / V} derived from Evert (2004b, Lemma A.8) using an (incorrect) independence assumption for \eqn{V_m} and \eqn{V - V_m}). } \section{Productivity Measures}{ The following productivity measures are currently supported: \describe{ \item{\code{K}:}{ Yule's (1944) \eqn{K = 10000 \cdot \frac{ \sum_m m^2 V_m - N}{ N^2 }}{K = 10000 * (SUM(m) m^2 Vm - N) / N^2} \cr (only for a complete observed frequency spectrum) } \item{\code{D}:}{ Simpson's (1949) \eqn{D = \sum_m V_m \frac{m}{N}\cdot \frac{m-1}{N-1}}{D = SUM(m) Vm * (m / N) * ((m - 1) / (N - 1))} \cr (only for a complete observed frequency spectrum) } \item{\code{R}:}{ Guiraud's (1954) \eqn{R = V / \sqrt{N}} } \item{\code{S}:}{ Sichel's (1975) \eqn{S = V_2 / V}{S = V2 / V}, i.e. 
the proportion of \emph{dis legomena} } \item{\code{H}:}{ Honoré's (1979) \eqn{H = 100 \frac{ \log N }{ 1 - V_1 / V }}{H = 100 * log(N) / (1 - V1 / V)}, a transformation of the proportion of \emph{hapax legomena} adjusted for sample size } \item{\code{C}:}{ Herdan's (1964) \eqn{C = \frac{ \log V }{ \log N }}{C = log(V) / log(N)} } \item{\code{P}:}{ Baayen's (1991) productivity index \eqn{P = \frac{V_1}{N}}{P = V1 / N}, which corresponds to the slope of the vocabulary growth curve (under random sampling assumptions) } \item{\code{TTR}:}{ the type-token ratio TTR = \eqn{V / N} } \item{\code{Hapax}:}{ the proportion of \emph{hapax legomena} \eqn{\frac{V_1}{V}}{V1 / V} } \item{\code{V}:}{ the total number of types \eqn{V} } %% \item{\code{}:}{} } } \references{ Evert, Stefan (2004b). \emph{The Statistics of Word Cooccurrences: Word Pairs and Collocations.} PhD Thesis, IMS, University of Stuttgart. URN urn:nbn:de:bsz:93-opus-23714 \url{http://elib.uni-stuttgart.de/opus/volltexte/2005/2371/} } \seealso{ \code{\link{lnre.bootstrap}} and \code{\link{bootstrap.confint}} for parametric bootstrapping experiments, which help to determine the true expectations and sampling distributions of all productivity measures. } \keyword{ methods } \keyword{ univar } \examples{ ## TODO }
5,389
gpl-3.0
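The formulas listed in the productivity.measures entry above can be checked directly against a toy token vector. The following base-R sketch is an illustration of those formulas, not the zipfR implementation; Simpson's D is computed in the equivalent per-type form sum of f*(f-1) over types, divided by N*(N-1).

```r
# Illustrative computation of several productivity measures from a toy
# token vector, following the formulas in the entry above (not zipfR code).
tokens <- c("a", "b", "a", "c", "a", "b", "d", "e", "a", "f")
N  <- length(tokens)          # sample size (number of tokens)
tf <- table(tokens)           # type-frequency list
V  <- length(tf)              # vocabulary size (number of types)
V1 <- sum(tf == 1)            # hapax legomena
V2 <- sum(tf == 2)            # dis legomena

TTR   <- V / N                               # type-token ratio
R     <- V / sqrt(N)                         # Guiraud (1954)
C     <- log(V) / log(N)                     # Herdan (1964)
P     <- V1 / N                              # Baayen (1991)
Hapax <- V1 / V                              # proportion of hapaxes
S     <- V2 / V                              # Sichel (1975)
K     <- 10000 * (sum(tf^2) - N) / N^2       # Yule (1944)
D     <- sum(tf * (tf - 1)) / (N * (N - 1))  # Simpson (1949)
```

K and D require the complete frequency spectrum, which a raw token vector provides by construction; the other measures need only N, V, V1 and V2.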
5e653dd1250afe71b84b0921bf9e8c27c13144ff
jcfisher/latentnetDiffusion
man/tidyDegrootList.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/tidy_degroot_list.R \name{tidyDegrootList} \alias{tidyDegrootList} \title{Wrapper function for tidyDegroot that calls tidyDegroot on a list of degroot matrices} \usage{ tidyDegrootList(pred, id) } \arguments{ \item{pred}{a list of matrices, generally produced by ergmmDegroot} \item{id}{a vector of ID values that will be added as a column to the resulting data.frame} } \value{ a data.frame (in tibble form) with 4 columns: the 3 columns produced by tidyDegroot, plus an additional column for the number of draws, which denotes the index from the original list } \description{ Wrapper function for tidyDegroot that calls tidyDegroot on a list of degroot matrices }
772
gpl-3.0
066283e8b31946018ef78e0a7109f852327b0171
enanomapper/RRegrs
RRegrs/man/RFRFEreg.Rd
\name{RFRFEreg} \alias{RFRFEreg} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Fitting Recursive Feature Elimination - Random Forest Models. } \description{ RFRFEreg fits RFE RF regression models and returns resampling-based performance measures using the train function from the caret package. } \usage{ RFRFEreg(my.datf.train,my.datf.test,sCV,iSplit=1, fDet=F,outFile="") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{my.datf.train}{the training data set; an object where samples are in rows and features are in columns. The first column should be a numeric or factor vector containing the outcome for each sample.} \item{my.datf.test}{the test data set.} \item{sCV}{A string or a character vector specifying which resampling method to use. See details below.} \item{iSplit}{a number indicating from which splitting of the data the above train and test sets are derived. The default value is 1.} \item{fDet}{A logical value for saving model statistics; the default value is FALSE. See below for details.} \item{outFile}{A string specifying the output file (could include a path) for details (fDet=TRUE).} } \details{ RMSE is the summary metric used to select the optimal model. RFE Random Forest uses the rfe function of caret, combined with the rfFuncs helper functions and randomForest modeling, to obtain the best model with the best feature set. To control the computational nuances of the train function, trainControl is used; the number of folds or resampling iterations is set to 10, and the number of complete sets of folds is set to 10 (for repeated k-fold cross-validation). sCV can take the following values: boot, boot632, cv, repeatedcv, LOOCV, LGOCV (for repeated training/test splits), none (only fits one model to the entire training set), oob (only for random forest, bagged trees, bagged earth, bagged flexible discriminant analysis, or conditional tree forest models), "adaptive_cv", "adaptive_boot" or "adaptive_LGOCV". 
If fDet=TRUE, the following output is produced: a CSV file with detailed statistics about the regression model (Regression method, splitting number, cross-validation type, Training set summary, Test set summary, Fitting summary, List of predictors, Training predictors, Test predictors, resampling statistics, feature importance, residuals of the fitted model, assessment of applicability domain (leverage analysis, Cook's distances, points influence)), 5-12 plots for fitting statistics as a PDF file for each splitting and cross-validation method (Training Yobs-Ypred, Test Yobs-Ypred, Feature Importance, Fitted vs. Residuals for Fitted Model, Leverage for Fitted Model, Cook's Distance for Fitted Model, 6 standard fitting plots using plot function with cutoff.Cook). } \value{ A list is returned containing: \item{stat.values}{model's statistics} \item{model}{the full rferf model, i.e. a list of class train} } \examples{ \dontrun{ fDet <- FALSE iSeed <- 1 # the fraction of training set from the entire dataset trainFrac <- 0.75 # dataset folder for input and output files PathDataSet <- 'DataResults' # load data set ds <- read.csv(ds.Housing,header=T) # split the data into training and test sets dsList <- DsSplit(ds,trainFrac,fDet,PathDataSet,iSeed) ds.train <- dsList$train ds.test <- dsList$test # types of cross-validation methods CVtypes <- c('repeatedcv','LOOCV') outLM <- 'RFRFEoutput.csv' RFRFE.fit <- RFRFEreg(ds.train,ds.test,CVtypes[1],iSplit=1,fDet=F,outFile=outLM) } } \author{ Jose A. Seoane, Carlos Fernandez-Lozano }
4,519
bsd-2-clause
9d4829d8a171238ff83b22a636bd614581aa1c27
jpshanno/ecoFlux
man/time_seq.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/Helper_Functions.R \name{time_seq} \alias{time_seq} \title{Create time sequence with custom intervals} \usage{ time_seq(start, end, step, units = "days") } \arguments{ \item{start}{a start date or time as a character string ("yyyy-mm-dd", "yyyy-mm-dd HH:MM", or "yyyy-mm-dd HH:MM:SS")} \item{end}{an end date or time as a character string ("yyyy-mm-dd", "yyyy-mm-dd HH:MM", or "yyyy-mm-dd HH:MM:SS")} \item{step}{a number representing the length of time in units between each step} \item{units}{the units of step: "seconds", "minutes", "days", "months", "years"} } \value{ A POSIXt vector } \description{ Creates a POSIXt vector from the start time to the end time in steps of the chosen interval length. } \examples{ time_seq("2016-11-01", "2016-11-03", step = 15, units = "minutes") }
883
gpl-2.0
066283e8b31946018ef78e0a7109f852327b0171
muntisa/RRegrs
RRegrs/man/RFRFEreg.Rd
\name{RFRFEreg} \alias{RFRFEreg} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Fitting Recursive Feature Elimination - Random Forest Models. } \description{ RFRFEreg fits RFE RF regression models and returns resampling-based performance measures using the train function from the caret package. } \usage{ RFRFEreg(my.datf.train,my.datf.test,sCV,iSplit=1, fDet=F,outFile="") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{my.datf.train}{the training data set; an object where samples are in rows and features are in columns. The first column should be a numeric or factor vector containing the outcome for each sample.} \item{my.datf.test}{the test data set.} \item{sCV}{A string or a character vector specifying which resampling method to use. See details below.} \item{iSplit}{a number indicating from which splitting of the data the above train and test sets are derived. The default value is 1.} \item{fDet}{A logical value for saving model statistics; the default value is FALSE. See below for details.} \item{outFile}{A string specifying the output file (could include a path) for details (fDet=TRUE).} } \details{ RMSE is the summary metric used to select the optimal model. RFE Random Forest uses the rfe function of caret, combined with the rfFuncs helper functions and randomForest modeling, to obtain the best model with the best feature set. To control the computational nuances of the train function, trainControl is used; the number of folds or resampling iterations is set to 10, and the number of complete sets of folds is set to 10 (for repeated k-fold cross-validation). sCV can take the following values: boot, boot632, cv, repeatedcv, LOOCV, LGOCV (for repeated training/test splits), none (only fits one model to the entire training set), oob (only for random forest, bagged trees, bagged earth, bagged flexible discriminant analysis, or conditional tree forest models), "adaptive_cv", "adaptive_boot" or "adaptive_LGOCV". 
If fDet=TRUE, the following output is produced: a CSV file with detailed statistics about the regression model (Regression method, splitting number, cross-validation type, Training set summary, Test set summary, Fitting summary, List of predictors, Training predictors, Test predictors, resampling statistics, feature importance, residuals of the fitted model, assessment of applicability domain (leverage analysis, Cook's distances, points influence)), 5-12 plots for fitting statistics as a PDF file for each splitting and cross-validation method (Training Yobs-Ypred, Test Yobs-Ypred, Feature Importance, Fitted vs. Residuals for Fitted Model, Leverage for Fitted Model, Cook's Distance for Fitted Model, 6 standard fitting plots using plot function with cutoff.Cook). } \value{ A list is returned containing: \item{stat.values}{model's statistics} \item{model}{the full rferf model, i.e. a list of class train} } \examples{ \dontrun{ fDet <- FALSE iSeed <- 1 # the fraction of training set from the entire dataset trainFrac <- 0.75 # dataset folder for input and output files PathDataSet <- 'DataResults' # load data set ds <- read.csv(ds.Housing,header=T) # split the data into training and test sets dsList <- DsSplit(ds,trainFrac,fDet,PathDataSet,iSeed) ds.train <- dsList$train ds.test <- dsList$test # types of cross-validation methods CVtypes <- c('repeatedcv','LOOCV') outLM <- 'RFRFEoutput.csv' RFRFE.fit <- RFRFEreg(ds.train,ds.test,CVtypes[1],iSplit=1,fDet=F,outFile=outLM) } } \author{ Jose A. Seoane, Carlos Fernandez-Lozano }
4,519
bsd-2-clause
066283e8b31946018ef78e0a7109f852327b0171
egonw/RRegrs
RRegrs/man/RFRFEreg.Rd
\name{RFRFEreg} \alias{RFRFEreg} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Fitting Recursive Feature Elimination - Random Forest Models. } \description{ RFRFEreg fits RFE RF regression models and returns resampling-based performance measures using the train function from the caret package. } \usage{ RFRFEreg(my.datf.train,my.datf.test,sCV,iSplit=1, fDet=F,outFile="") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{my.datf.train}{the training data set; an object where samples are in rows and features are in columns. The first column should be a numeric or factor vector containing the outcome for each sample.} \item{my.datf.test}{the test data set.} \item{sCV}{A string or a character vector specifying which resampling method to use. See details below.} \item{iSplit}{a number indicating from which splitting of the data the above train and test sets are derived. The default value is 1.} \item{fDet}{A logical value for saving model statistics; the default value is FALSE. See below for details.} \item{outFile}{A string specifying the output file (could include a path) for details (fDet=TRUE).} } \details{ RMSE is the summary metric used to select the optimal model. RFE Random Forest uses the rfe function of caret, combined with the rfFuncs helper functions and randomForest modeling, to obtain the best model with the best feature set. To control the computational nuances of the train function, trainControl is used; the number of folds or resampling iterations is set to 10, and the number of complete sets of folds is set to 10 (for repeated k-fold cross-validation). sCV can take the following values: boot, boot632, cv, repeatedcv, LOOCV, LGOCV (for repeated training/test splits), none (only fits one model to the entire training set), oob (only for random forest, bagged trees, bagged earth, bagged flexible discriminant analysis, or conditional tree forest models), "adaptive_cv", "adaptive_boot" or "adaptive_LGOCV". 
If fDet=TRUE, the following output is produced: a CSV file with detailed statistics about the regression model (Regression method, splitting number, cross-validation type, Training set summary, Test set summary, Fitting summary, List of predictors, Training predictors, Test predictors, resampling statistics, feature importance, residuals of the fitted model, assessment of applicability domain (leverage analysis, Cook's distances, points influence)), 5-12 plots for fitting statistics as a PDF file for each splitting and cross-validation method (Training Yobs-Ypred, Test Yobs-Ypred, Feature Importance, Fitted vs. Residuals for Fitted Model, Leverage for Fitted Model, Cook's Distance for Fitted Model, 6 standard fitting plots using plot function with cutoff.Cook). } \value{ A list is returned containing: \item{stat.values}{model's statistics} \item{model}{the full rferf model, i.e. a list of class train} } \examples{ \dontrun{ fDet <- FALSE iSeed <- 1 # the fraction of training set from the entire dataset trainFrac <- 0.75 # dataset folder for input and output files PathDataSet <- 'DataResults' # load data set ds <- read.csv(ds.Housing,header=T) # split the data into training and test sets dsList <- DsSplit(ds,trainFrac,fDet,PathDataSet,iSeed) ds.train <- dsList$train ds.test <- dsList$test # types of cross-validation methods CVtypes <- c('repeatedcv','LOOCV') outLM <- 'RFRFEoutput.csv' RFRFE.fit <- RFRFEreg(ds.train,ds.test,CVtypes[1],iSplit=1,fDet=F,outFile=outLM) } } \author{ Jose A. Seoane, Carlos Fernandez-Lozano }
4,519
bsd-2-clause
9db9839531d77ad456b1de6672ddd12dbf3cf820
IQSS/Zelig4
man/is.zelig.package.Rd
\name{is.zelig.package} \alias{is.zelig.package} \title{Whether an Installed R Package Depends on Zelig} \usage{ is.zelig.package(package = "") } \arguments{ \item{package}{a character-string naming a package} } \value{ whether this package depends on Zelig } \description{ Whether an Installed R Package Depends on Zelig } \note{ This function was used internally to determine whether an R package is Zelig compliant, but is now likely deprecated. This test is useless if not paired with }
497
gpl-2.0
9eef49e683fdfe92fe8936bfd510327e7fd290e5
cran/lossDev
man/rateOfDecay-comma-BreakAnnualAggLossDevModelOutput-dash-method.Rd
\name{rateOfDecay,BreakAnnualAggLossDevModelOutput-method} \alias{rateOfDecay,BreakAnnualAggLossDevModelOutput-method} \title{A method to plot and/or return the estimated rate of decay vs development year time for break models.} \description{A method to plot and/or return the estimated rate of decay vs development year time for break models.} \details{The simplest definition of the rate of decay is the exponentiated first difference of the \link[=consumptionPath]{consumption path}. The break model has two rates of decay: one applies to exposure years prior to a structural break, and another applies after the break. This is a method to allow for the retrieval and illustration of these rates of decay. Because the model is Bayesian, the estimated rates of decay come as distributions; only the medians are plotted and/or returned.} \value{Mainly called for the side effect of plotting. Also returns the plotted statistics. Returned invisibly.} \docType{methods} \seealso{\code{\link{rateOfDecay}} \code{\link[=rateOfDecay,StandardAnnualAggLossDevModelOutput-method]{rateOfDecay("StandardAnnualAggLossDevModelOutput")}} \code{\link{consumptionPath}}} \arguments{\item{object}{The object from which to plot and/or return the estimated rate of decay.} \item{plot}{A logical value. If \code{TRUE}, then the plot is generated and the statistics are returned; otherwise only the statistics are returned.}}
1,424
gpl-3.0
cbc85aee136817989fbdc64f9d7f49bdc7e038e6
mirror/r
src/library/stats/man/pp.test.Rd
% File src/library/stats/man/pp.test.Rd % Part of the R package, http://www.R-project.org % Copyright 1995-2008 R Core Team % Distributed under GPL 2 or later \name{PP.test} \alias{PP.test} \title{Phillips-Perron Test for Unit Roots} \usage{ PP.test(x, lshort = TRUE) } \arguments{ \item{x}{a numeric vector or univariate time series.} \item{lshort}{a logical indicating whether the short or long version of the truncation lag parameter is used.} } \description{ Computes the Phillips-Perron test for the null hypothesis that \code{x} has a unit root against a stationary alternative. } \details{ The general regression equation which incorporates a constant and a linear trend is used and the corrected t-statistic for a first order autoregressive coefficient equals one is computed. To estimate \code{sigma^2} the Newey-West estimator is used. If \code{lshort} is \code{TRUE}, then the truncation lag parameter is set to \code{trunc(4*(n/100)^0.25)}, otherwise \code{trunc(12*(n/100)^0.25)} is used. The p-values are interpolated from Table 4.2, page 103 of Banerjee \emph{et al} (1993). Missing values are not handled. } \value{ A list with class \code{"htest"} containing the following components: \item{statistic}{the value of the test statistic.} \item{parameter}{the truncation lag parameter.} \item{p.value}{the p-value of the test.} \item{method}{a character string indicating what type of test was performed.} \item{data.name}{a character string giving the name of the data.} } \references{ A. Banerjee, J. J. Dolado, J. W. Galbraith, and D. F. Hendry (1993) \emph{Cointegration, Error Correction, and the Econometric Analysis of Non-Stationary Data}, Oxford University Press, Oxford. P. Perron (1988) Trends and random walks in macroeconomic time series. \emph{Journal of Economic Dynamics and Control} \bold{12}, 297--332. } \author{A. Trapletti} \examples{ x <- rnorm(1000) PP.test(x) y <- cumsum(x) # has unit root PP.test(y) } \keyword{ts}
2,023
gpl-2.0
f6da763a929eb50ca46247011f4fde5054c46634
jillianderson8/cydr
man/pass_end_turns.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/PassEndTurns.R \name{pass_end_turns} \alias{pass_end_turns} \title{Identify pass-end turns} \usage{ pass_end_turns(data, remove=FALSE, short_angle=45, long_angle=178, short_offset =5, long_offset=20) } \arguments{ \item{data}{a dataframe standardized and outputted from AgLeader.} \item{remove}{a boolean. Defaults to \code{FALSE}. Indicates whether to remove identified errors.} \item{short_angle}{a number between 0 and 180. Defaults to 45. Used as the angle to determine whether an observation is within a short-turn. If the difference between compared points is greater than this angle, a short-turn is identified.} \item{long_angle}{a number between 0 and 180. Defaults to 178. Used as the angle to determine whether an observation is within a long-turn. If the difference between compared points is greater than this angle, a long-turn is identified.} \item{short_offset}{a number greater than 0 such that \code{short_offset < long_offset}. Defaults to 5. Used to determine the pair of numbers which will be compared to determine whether a short-turn is occurring. For example, a \code{short_offset} of 5 will compare the point 5 prior and the point 5 past the point of interest. If the difference in these points' directions is greater than \code{short_angle}, a short-turn is occurring.} \item{long_offset}{a number greater than 0 such that \code{short_offset < long_offset}. Defaults to 20. Used to determine the pair of numbers which will be compared to determine whether a long-turn is occurring. For example, a \code{long_offset} of 20 will compare the point 20 prior and the point 20 past the point of interest. If the difference in these points' directions is greater than \code{long_angle}, a long-turn is occurring.} } \value{ A dataframe with an added column called \code{cydr_PassEndError}. This column will be set to \code{TRUE} if it meets the criteria for an erroneous observation. 
} \description{ Adds a column called \code{cydr_PassEndError} to a dataframe to identify observations occurring within pass-end turns. These observations are identified by comparing two pairs of points occurring before and after the point of interest. If the difference in direction between both pairs of points is above their respective thresholds, the observation is identified as pass-end turn error. The first pair identifies whether the point is within a "short-turn", by default checking if the points 5 before and 5 after have a difference in direction equal to or greater than 45 degrees. The second pair identifies whether the point is within a "long-turn", by default checking whether the points 20 before and 20 after have a difference in direction of 178 degrees or greater. The thresholds used to determine short- and long-turns can all be customized using the provided arguments. } \examples{ pass_end_turns(data) pass_end_turns(data, remove=TRUE) # Removes all identified errors pass_end_turns(data, long_angle=170) # Identifies differences in long_offset points of > 170 as erroneous. } \seealso{ Other core functions: \code{\link{narrow_passes}}, \code{\link{residual_outliers}}, \code{\link{speed}} }
3,226
gpl-3.0
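The pass-end detection documented above reduces to comparing travel directions at a fixed offset before and after each observation and flagging the point when the angular difference exceeds a threshold. The sketch below is a hedged base-R reimplementation of that idea; ang_diff and flag_turns are illustrative names, and the actual cydr source may differ in details such as edge handling.

```r
# Angular difference between two headings in degrees, taking the shorter
# way around the circle (so ang_diff(350, 10) is 20, not 340).
ang_diff <- function(a, b) {
  d <- abs(a - b) %% 360
  ifelse(d > 180, 360 - d, d)
}

# Flag point i when the headings `offset` points before and after it
# differ by more than `angle` degrees (the short-turn defaults above).
flag_turns <- function(dir, offset = 5, angle = 45) {
  n <- length(dir)
  idx <- seq_len(n)
  before <- idx - offset
  after  <- idx + offset
  ok <- before >= 1 & after <= n      # skip points too close to the ends
  flag <- rep(FALSE, n)
  flag[ok] <- ang_diff(dir[before[ok]], dir[after[ok]]) > angle
  flag
}

# A pass travelling at 90 degrees, then turning to 270 degrees:
dirs <- c(rep(90, 10), rep(270, 10))
which(flag_turns(dirs))   # points straddling the heading change are flagged
```

Long-turn detection is the same check with offset = 20 and angle = 178; combining the two flags reproduces the documented short/long criteria.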
cbc85aee136817989fbdc64f9d7f49bdc7e038e6
kalibera/rexp
src/library/stats/man/pp.test.Rd
% File src/library/stats/man/pp.test.Rd % Part of the R package, http://www.R-project.org % Copyright 1995-2008 R Core Team % Distributed under GPL 2 or later \name{PP.test} \alias{PP.test} \title{Phillips-Perron Test for Unit Roots} \usage{ PP.test(x, lshort = TRUE) } \arguments{ \item{x}{a numeric vector or univariate time series.} \item{lshort}{a logical indicating whether the short or long version of the truncation lag parameter is used.} } \description{ Computes the Phillips-Perron test for the null hypothesis that \code{x} has a unit root against a stationary alternative. } \details{ The general regression equation which incorporates a constant and a linear trend is used and the corrected t-statistic for a first order autoregressive coefficient equals one is computed. To estimate \code{sigma^2} the Newey-West estimator is used. If \code{lshort} is \code{TRUE}, then the truncation lag parameter is set to \code{trunc(4*(n/100)^0.25)}, otherwise \code{trunc(12*(n/100)^0.25)} is used. The p-values are interpolated from Table 4.2, page 103 of Banerjee \emph{et al} (1993). Missing values are not handled. } \value{ A list with class \code{"htest"} containing the following components: \item{statistic}{the value of the test statistic.} \item{parameter}{the truncation lag parameter.} \item{p.value}{the p-value of the test.} \item{method}{a character string indicating what type of test was performed.} \item{data.name}{a character string giving the name of the data.} } \references{ A. Banerjee, J. J. Dolado, J. W. Galbraith, and D. F. Hendry (1993) \emph{Cointegration, Error Correction, and the Econometric Analysis of Non-Stationary Data}, Oxford University Press, Oxford. P. Perron (1988) Trends and random walks in macroeconomic time series. \emph{Journal of Economic Dynamics and Control} \bold{12}, 297--332. } \author{A. Trapletti} \examples{ x <- rnorm(1000) PP.test(x) y <- cumsum(x) # has unit root PP.test(y) } \keyword{ts}
2,023
gpl-2.0
cbc85aee136817989fbdc64f9d7f49bdc7e038e6
skyguy94/R
src/library/stats/man/pp.test.Rd
% File src/library/stats/man/pp.test.Rd % Part of the R package, http://www.R-project.org % Copyright 1995-2008 R Core Team % Distributed under GPL 2 or later \name{PP.test} \alias{PP.test} \title{Phillips-Perron Test for Unit Roots} \usage{ PP.test(x, lshort = TRUE) } \arguments{ \item{x}{a numeric vector or univariate time series.} \item{lshort}{a logical indicating whether the short or long version of the truncation lag parameter is used.} } \description{ Computes the Phillips-Perron test for the null hypothesis that \code{x} has a unit root against a stationary alternative. } \details{ The general regression equation which incorporates a constant and a linear trend is used and the corrected t-statistic for a first order autoregressive coefficient equals one is computed. To estimate \code{sigma^2} the Newey-West estimator is used. If \code{lshort} is \code{TRUE}, then the truncation lag parameter is set to \code{trunc(4*(n/100)^0.25)}, otherwise \code{trunc(12*(n/100)^0.25)} is used. The p-values are interpolated from Table 4.2, page 103 of Banerjee \emph{et al} (1993). Missing values are not handled. } \value{ A list with class \code{"htest"} containing the following components: \item{statistic}{the value of the test statistic.} \item{parameter}{the truncation lag parameter.} \item{p.value}{the p-value of the test.} \item{method}{a character string indicating what type of test was performed.} \item{data.name}{a character string giving the name of the data.} } \references{ A. Banerjee, J. J. Dolado, J. W. Galbraith, and D. F. Hendry (1993) \emph{Cointegration, Error Correction, and the Econometric Analysis of Non-Stationary Data}, Oxford University Press, Oxford. P. Perron (1988) Trends and random walks in macroeconomic time series. \emph{Journal of Economic Dynamics and Control} \bold{12}, 297--332. } \author{A. Trapletti} \examples{ x <- rnorm(1000) PP.test(x) y <- cumsum(x) # has unit root PP.test(y) } \keyword{ts}
2,023
gpl-2.0
cbc85aee136817989fbdc64f9d7f49bdc7e038e6
hxfeng/R-3.1.2
src/library/stats/man/pp.test.Rd
% File src/library/stats/man/pp.test.Rd % Part of the R package, http://www.R-project.org % Copyright 1995-2008 R Core Team % Distributed under GPL 2 or later \name{PP.test} \alias{PP.test} \title{Phillips-Perron Test for Unit Roots} \usage{ PP.test(x, lshort = TRUE) } \arguments{ \item{x}{a numeric vector or univariate time series.} \item{lshort}{a logical indicating whether the short or long version of the truncation lag parameter is used.} } \description{ Computes the Phillips-Perron test for the null hypothesis that \code{x} has a unit root against a stationary alternative. } \details{ The general regression equation which incorporates a constant and a linear trend is used and the corrected t-statistic for a first order autoregressive coefficient equals one is computed. To estimate \code{sigma^2} the Newey-West estimator is used. If \code{lshort} is \code{TRUE}, then the truncation lag parameter is set to \code{trunc(4*(n/100)^0.25)}, otherwise \code{trunc(12*(n/100)^0.25)} is used. The p-values are interpolated from Table 4.2, page 103 of Banerjee \emph{et al} (1993). Missing values are not handled. } \value{ A list with class \code{"htest"} containing the following components: \item{statistic}{the value of the test statistic.} \item{parameter}{the truncation lag parameter.} \item{p.value}{the p-value of the test.} \item{method}{a character string indicating what type of test was performed.} \item{data.name}{a character string giving the name of the data.} } \references{ A. Banerjee, J. J. Dolado, J. W. Galbraith, and D. F. Hendry (1993) \emph{Cointegration, Error Correction, and the Econometric Analysis of Non-Stationary Data}, Oxford University Press, Oxford. P. Perron (1988) Trends and random walks in macroeconomic time series. \emph{Journal of Economic Dynamics and Control} \bold{12}, 297--332. } \author{A. Trapletti} \examples{ x <- rnorm(1000) PP.test(x) y <- cumsum(x) # has unit root PP.test(y) } \keyword{ts}
2,023
gpl-2.0
b47754e22e2c4d2c77382de9ab42c5f48e7b61a4
radfordneal/pqR
src/library/graphics/man/stripchart.Rd
% File src/library/graphics/man/stripchart.Rd % Part of the R package, http://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{stripchart} \title{1-D Scatter Plots} \alias{stripchart} \alias{stripchart.default} \alias{stripchart.formula} \description{ \code{stripchart} produces one dimensional scatter plots (or dot plots) of the given data. These plots are a good alternative to \code{\link{boxplot}}s when sample sizes are small. } \usage{ stripchart(x, \dots) \method{stripchart}{formula}(x, data = NULL, dlab = NULL, \dots, subset, na.action = NULL) \method{stripchart}{default}(x, method = "overplot", jitter = 0.1, offset = 1/3, vertical = FALSE, group.names, add = FALSE, at = NULL, xlim = NULL, ylim = NULL, ylab=NULL, xlab=NULL, dlab="", glab="", log = "", pch = 0, col = par("fg"), cex = par("cex"), axes = TRUE, frame.plot = axes, \dots) } \arguments{ \item{x}{the data from which the plots are to be produced. In the default method the data can be specified as a single numeric vector, or as list of numeric vectors, each corresponding to a component plot. In the \code{formula} method, a symbolic specification of the form \code{y ~ g} can be given, indicating the observations in the vector \code{y} are to be grouped according to the levels of the factor \code{g}. \code{NA}s are allowed in the data.} \item{data}{a data.frame (or list) from which the variables in \code{x} should be taken.} \item{subset}{an optional vector specifying a subset of observations to be used for plotting.} \item{na.action}{a function which indicates what should happen when the data contain \code{NA}s. The default is to ignore missing values in either the response or the group.} \item{\dots}{additional parameters passed to the default method, or by it to \code{plot}, \code{points}, \code{axis} and \code{title} to control the appearance of the plot.} \item{method}{the method to be used to separate coincident points. 
The default method \code{"overplot"} causes such points to be overplotted, but it is also possible to specify \code{"jitter"} to jitter the points, or \code{"stack"} to have coincident points stacked. The last method only makes sense for very granular data.} \item{jitter}{when \code{method="jitter"} is used, \code{jitter} gives the amount of jittering applied.} \item{offset}{when stacking is used, points are stacked this many line-heights (symbol widths) apart.} \item{vertical}{when vertical is \code{TRUE} the plots are drawn vertically rather than the default horizontal.} \item{group.names}{group labels which will be printed alongside (or underneath) each plot.} \item{add}{logical, if true \emph{add} the chart to the current plot.} \item{at}{numeric vector giving the locations where the charts should be drawn, particularly when \code{add = TRUE}; defaults to \code{1:n} where \code{n} is the number of boxes.} \item{ylab, xlab}{labels: see \code{\link{title}}.} \item{dlab, glab}{alternate way to specify axis labels: see \sQuote{Details}.} \item{xlim, ylim}{plot limits: see \code{\link{plot.window}}.} \item{log}{on which axes to use a log scale: see \code{\link{plot.default}}} \item{pch, col, cex}{Graphical parameters: see \code{\link{par}}.} \item{axes, frame.plot}{Axis control: see \code{\link{plot.default}}} } \details{ Extensive examples of the use of this kind of plot can be found in Box, Hunter and Hunter or Seber and Wild. The \code{dlab} and \code{glab} labels may be used instead of \code{xlab} and \code{ylab} if those are not specified. \code{dlab} applies to the continuous data axis (the X axis unless \code{vertical} is \code{TRUE}), \code{glab} to the group axis. 
} \examples{ x <- stats::rnorm(50) xr <- round(x, 1) stripchart(x) ; m <- mean(par("usr")[1:2]) text(m, 1.04, "stripchart(x, \"overplot\")") stripchart(xr, method = "stack", add = TRUE, at = 1.2) text(m, 1.35, "stripchart(round(x,1), \"stack\")") stripchart(xr, method = "jitter", add = TRUE, at = 0.7) text(m, 0.85, "stripchart(round(x,1), \"jitter\")") stripchart(decrease ~ treatment, main = "stripchart(OrchardSprays)", vertical = TRUE, log = "y", data = OrchardSprays) stripchart(decrease ~ treatment, at = c(1:8)^2, main = "stripchart(OrchardSprays)", vertical = TRUE, log = "y", data = OrchardSprays) } \keyword{hplot}
4,544
gpl-2.0
2fa55ff00005da1f59ec74a1a7d03b03792a146f
wStockhausen/rsimTCSAM
man/calcStateTransitionMatrices.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/calcStateTransitionMatrices.R \name{calcStateTransitionMatrices} \alias{calcStateTransitionMatrices} \title{Calculate the state transition matrices for a single sex.} \usage{ calcStateTransitionMatrices(mc, S1_msz, P_sz, Th_sz, T_szz, S2_msz) } \arguments{ \item{mc}{- model configuration list object} \item{S1_msz}{- 3d array of pr(survival) from start of year to mating/molting by maturity state, shell condition, size class} \item{P_sz}{- 2d array of the probability by size class of molting for immature crab, by shell condition} \item{Th_sz}{- 2d array with pr(molt to maturity|size, molt) for immature crab by shell condition} \item{T_szz}{- 3d array with size transition matrix for growth by immature crab by shell condition} \item{S2_msz}{- 3d array of pr(survival) from mating/molting to end of year by maturity state, shell condition, size class} } \value{ list with state transition matrices as named elements: A : imm, new -> imm, new B : imm, old -> imm, new C : imm, new -> imm, old D : imm, old -> imm, old E : imm, new -> mat, new F : imm, old -> mat, new G : mat, new -> mat, old H : mat, old -> mat, old } \description{ Function to calculate the state transition matrices for a single sex. }
1,302
mit
309762b2b015d10cc4012f750ed9fb61372c8f9f
surbut/matrix_ash
man/compute.hm.train.bma.only.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/main.R \name{compute.hm.train.bma.only} \alias{compute.hm.train.bma.only} \title{compute.hm.train.bma.only} \usage{ compute.hm.train.bma.only(train.b, se.train, covmat, A) }
252
gpl-2.0
17febd4a716342e5c690ba376acbad870043a8b9
pniessen/Segment_Helper
web/weights 2/man/wtd.chi.sq.Rd
\name{wtd.chi.sq} \alias{wtd.chi.sq} \title{ Produces weighted chi-squared tests. } \description{ \code{wtd.chi.sq} produces weighted chi-squared tests for two- and three-variable contingency tables. Decomposes parts of three-variable contingency tables as well. Note that analyses run with the default parameters here treat the weights as an estimate of the precision of the information. A prior version of this software was set to default to \code{mean1=FALSE}.} \usage{ wtd.chi.sq(var1, var2, var3=NULL, weight=NULL, na.rm=TRUE, drop.missing.levels=TRUE, mean1=TRUE) } \arguments{ \item{var1}{ \code{var1} is a vector of values which the researcher would like to use to divide any data set. } \item{var2}{ \code{var2} is a vector of values which the researcher would like to use to divide any data set. } \item{var3}{ \code{var3} is an optional additional vector of values which the researcher would like to use to divide any data set. } \item{weight}{ \code{weight} is an optional vector of weights to be used to determine the weighted chi-squared for all analyses. } \item{na.rm}{ \code{na.rm} removes missing data from analyses. } \item{drop.missing.levels}{ \code{drop.missing.levels} drops missing levels from variables. } \item{mean1}{ \code{mean1} is an optional parameter for determining whether the weights should be forced to have an average value of 1. If this is set as false, the weighted correlations will be produced with the assumption that the true N of the data is equivalent to the sum of the weights. } } \value{ A two-way chi-squared produces a vector including a single chi-squared value, degrees of freedom measure, and p-value for each analysis. A three-way chi-squared produces a matrix with a single chi-squared value, degrees of freedom measure, and p-value for each of seven analyses. 
These include: (1) the values using a three-way contingency table, (2) the values for a two-way contingency table with each pair of variables, and (3) assessments for whether the relations between each pair of variables are significantly different across levels of the third variable. } \author{ Josh Pasek, Assistant Professor of Communication Studies at the University of Michigan (www.joshpasek.com). } \seealso{ \code{\link{wtd.cor}} \code{\link{wtd.t.test}} } \examples{ var1 <- c(1,1,1,1,1,2,2,2,2,2,3,3,3,3,3) var2 <- c(1,1,2,2,3,3,1,1,2,2,3,3,1,1,2) var3 <- c(1,2,3,1,2,3,1,2,3,1,2,3,1,2,3) weight <- c(.5,.5,.5,.5,.5,1,1,1,1,1,2,2,2,2,2) wtd.chi.sq(var1, var2) wtd.chi.sq(var1, var2, weight=weight) wtd.chi.sq(var1, var2, var3) wtd.chi.sq(var1, var2, var3, weight=weight) } \keyword{ ~contingency tables } \keyword{ ~chisquared } \keyword{ ~decompose}
2,695
mit
83e4b5c56b3dc656d999529e3675e3edd898d61f
dieterich-lab/JACUSA
JacusaHelper/man/JacusaHelper.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/common.R \docType{package} \name{JacusaHelper} \alias{JacusaHelper} \alias{JacusaHelper-package} \title{JacusaHelper: A package for post-processing JACUSA result files.} \description{ The JacusaHelper package provides three categories of important functions: InputOutput, AddInfo, and Filter. } \details{ When calling RDDs where the RNA-Seq sample has been generated with a stranded sequencing library, base change(s) can be directly inferred and, if necessary, base calls can be inverted. By default, DNA needs to be provided as sample 1 and cDNA as sample 2! Warning: Some functions do not support replicates or are exclusively applicable to RDD or RRD result files! Use l <- Read(l) to read the data, then sample <- Samples(l, 1) to extract the sample-specific data, and continue with ToMatrix(sample). } \section{InputOutput functions}{ The functions Read and Write facilitate input and output operations on JACUSA output files. See: \itemize{ \item Read \item Write } } \section{AddInfo functions}{ These functions calculate and add additional information such as read depth or base changes. This includes functions that convert base counts encoded as character vectors to base count matrices. See: \itemize{ \item AddCoverageInfo \item AddBaseInfo \item AddBaseChangeInfo \item AddEditingFreqInfo } } \section{Filter functions}{ This function set enables processing of JACUSA output files, such as filtering by read coverage or enforcing a minimal number of variant base calls per sample. See: \itemize{ \item FilterByCoverage \item FilterByStat \item FilterResult \item FilterByMinVariantCount } }
1,716
gpl-3.0
1cc41351ff9ad36382c8b7b0e92d3a3594701ce5
nickmckay/LiPD-utilities
R/deprecated/man/merge_csv_table.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/jsons_merge.R \name{merge_csv_table} \alias{merge_csv_table} \title{Merge CSV data into each table} \usage{ merge_csv_table(tables, crumbs, csvs) } \arguments{ \item{crumbs}{Crumbs} \item{csvs}{CSV data} } \value{ list models: Metadata } \description{ Merge CSV data into each table } \keyword{internal}
395
gpl-2.0
4d9c7be2cb1bd59c67572b7233b375fe28243b38
mm0hgw/ballot
ballot/man/regionTag.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/regionTag.R \name{regionTag} \alias{regionTag} \title{regionTag} \usage{ regionTag(x, rTitle = NULL, rLayerTag = NULL, rMask = NULL) } \arguments{ \item{x}{a 'regionTag' or 'character'} \item{rTitle}{a 'character' title} \item{rLayerTag}{a 'layerTag' key} \item{rMask}{a 'vector' defining the subset of the layerTag layer that forms the region.} } \description{ regionTag }
455
gpl-2.0
17febd4a716342e5c690ba376acbad870043a8b9
pniessen/Segment_Helper
web/weights/man/wtd.chi.sq.Rd
\name{wtd.chi.sq} \alias{wtd.chi.sq} \title{ Produces weighted chi-squared tests. } \description{ \code{wtd.chi.sq} produces weighted chi-squared tests for two- and three-variable contingency tables. Decomposes parts of three-variable contingency tables as well. Note that analyses run with the default parameters here treat the weights as an estimate of the precision of the information. A prior version of this software was set to default to \code{mean1=FALSE}.} \usage{ wtd.chi.sq(var1, var2, var3=NULL, weight=NULL, na.rm=TRUE, drop.missing.levels=TRUE, mean1=TRUE) } \arguments{ \item{var1}{ \code{var1} is a vector of values which the researcher would like to use to divide any data set. } \item{var2}{ \code{var2} is a vector of values which the researcher would like to use to divide any data set. } \item{var3}{ \code{var3} is an optional additional vector of values which the researcher would like to use to divide any data set. } \item{weight}{ \code{weight} is an optional vector of weights to be used to determine the weighted chi-squared for all analyses. } \item{na.rm}{ \code{na.rm} removes missing data from analyses. } \item{drop.missing.levels}{ \code{drop.missing.levels} drops missing levels from variables. } \item{mean1}{ \code{mean1} is an optional parameter for determining whether the weights should be forced to have an average value of 1. If this is set as false, the weighted correlations will be produced with the assumption that the true N of the data is equivalent to the sum of the weights. } } \value{ A two-way chi-squared produces a vector including a single chi-squared value, degrees of freedom measure, and p-value for each analysis. A three-way chi-squared produces a matrix with a single chi-squared value, degrees of freedom measure, and p-value for each of seven analyses. 
These include: (1) the values using a three-way contingency table, (2) the values for a two-way contingency table with each pair of variables, and (3) assessments for whether the relations between each pair of variables are significantly different across levels of the third variable. } \author{ Josh Pasek, Assistant Professor of Communication Studies at the University of Michigan (www.joshpasek.com). } \seealso{ \code{\link{wtd.cor}} \code{\link{wtd.t.test}} } \examples{ var1 <- c(1,1,1,1,1,2,2,2,2,2,3,3,3,3,3) var2 <- c(1,1,2,2,3,3,1,1,2,2,3,3,1,1,2) var3 <- c(1,2,3,1,2,3,1,2,3,1,2,3,1,2,3) weight <- c(.5,.5,.5,.5,.5,1,1,1,1,1,2,2,2,2,2) wtd.chi.sq(var1, var2) wtd.chi.sq(var1, var2, weight=weight) wtd.chi.sq(var1, var2, var3) wtd.chi.sq(var1, var2, var3, weight=weight) } \keyword{ ~contingency tables } \keyword{ ~chisquared } \keyword{ ~decompose}
2,695
mit
d4f8748aa1f35418d0585b7da26e3adadc7acdb4
KWB-R/kwb.wtaq
man/modelFitnessAggregated.Rd
null
326
mit
daa1d16619773985ee0ab58a2f22fdb9cef47912
cran/Zelig
man/Zelig-ls-class.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/model-ls.R \docType{class} \name{Zelig-ls-class} \alias{Zelig-ls-class} \alias{zls} \title{Least Squares Regression for Continuous Dependent Variables} \arguments{ \item{formula}{a symbolic representation of the model to be estimated, in the form \code{y ~ x1 + x2}, where \code{y} is the dependent variable and \code{x1} and \code{x2} are the explanatory variables, and \code{y}, \code{x1}, and \code{x2} are contained in the same dataset. (You may include more than two explanatory variables, of course.) The \code{+} symbol means ``inclusion'' not ``addition.'' You may also include interaction terms and main effects in the form \code{x1*x2} without computing them in prior steps; \code{I(x1*x2)} to include only the interaction term and exclude the main effects; and quadratic terms in the form \code{I(x1^2)}.} \item{model}{the name of a statistical model to estimate. For a list of other supported models and their documentation see: \url{http://docs.zeligproject.org/articles/}.} \item{data}{the name of a data frame containing the variables referenced in the formula or a list of multiply imputed data frames each having the same variable names and row numbers (created by \code{Amelia} or \code{\link{to_zelig_mi}}).} \item{...}{additional arguments passed to \code{zelig}, relevant for the model to be estimated.} \item{by}{a factor variable contained in \code{data}. If supplied, \code{zelig} will subset the data frame based on the levels in the \code{by} variable, and estimate a model for each subset. This can save a considerable amount of effort. 
You may also use \code{by} to run models using MatchIt subclasses.} \item{cite}{If set to 'TRUE' (default), the model citation will be printed to the console.} } \value{ Depending on the class of model selected, \code{zelig} will return an object with elements including \code{coefficients}, \code{residuals}, and \code{formula} which may be summarized using \code{summary(z.out)} or individually extracted using, for example, \code{coef(z.out)}. See \url{http://docs.zeligproject.org/articles/getters.html} for a list of functions to extract model components. You can also extract whole fitted model objects using \code{\link{from_zelig_model}}. } \description{ Least Squares Regression for Continuous Dependent Variables } \details{ Additional parameters available to this model include: \itemize{ \item \code{weights}: vector of weight values or a name of a variable in the dataset by which to weight the model. For more information see: \url{http://docs.zeligproject.org/articles/weights.html}. \item \code{bootstrap}: logical or numeric. If \code{FALSE} don't use bootstraps to robustly estimate uncertainty around model parameters due to sampling error. If an integer is supplied, the number of bootstraps to run. For more information see: \url{http://docs.zeligproject.org/articles/bootstraps.html}. } } \section{Methods}{ \describe{ \item{\code{zelig(formula, data, model = NULL, ..., weights = NULL, by, bootstrap = FALSE)}}{The zelig function estimates a variety of statistical models} }} \examples{ library(Zelig) data(macro) z.out1 <- zelig(unem ~ gdp + capmob + trade, model = "ls", data = macro, cite = FALSE) summary(z.out1) } \seealso{ Vignette: \url{http://docs.zeligproject.org/articles/zelig_ls.html} }
3,405
gpl-2.0
329202135377df6370c2896e745957da5ef045d1
amsantac/TOC
man/uncertainty.Rd
\name{uncertainty} \alias{uncertainty} \title{ Uncertainty in AUC calculation } \description{ TOC internal function. It calculates uncertainty in AUC calculation } \usage{ uncertainty(index, tocd) } \arguments{ \item{index}{ index vector } \item{tocd}{ data.frame output from \code{roctable} } } \note{ This function is not meant to be called by users directly } \value{ a numeric value representing uncertainty in AUC calculation } \keyword{ spatial }
490
gpl-2.0
567b9316ffc57b4e9a5e8ced39120505bc434e3d
davidnipperess/PDcalc
man/phylocurve.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/phylocurve.R \name{phylocurve} \alias{phylocurve} \title{Generate a rarefaction curve of Phylogenetic Diversity} \usage{ phylocurve(x, phy, stepm = 1, subsampling = "individual", replace = FALSE) } \arguments{ \item{x}{is the community data given as a \code{data.frame} or \code{matrix} with species/OTUs as columns and samples/sites as rows (like in the \code{vegan} package). Columns are labelled with the names of the species/OTUs. Rows are labelled with the names of the samples/sites. Data can be either abundance or incidence (0/1). Column labels must match tip labels in the phylogenetic tree exactly!} \item{phy}{is a rooted phylogenetic tree with branch lengths stored as a phylo object (as in the \code{ape} package) with terminal nodes labelled with names matching those of the community data table. Note that the function trims away any terminal taxa not present in the community data table, so it is not necessary to do this beforehand.} \item{stepm}{is the size of the interval in a sequence of numbers of individuals, sites or species to which \code{x} is to be rarefied.} \item{subsampling}{indicates whether the subsampling will be by \code{'individual'} (default), \code{'site'} or \code{'species'}. When there are multiple sites, rarefaction by individuals or species is done by first pooling the sites.} \item{replace}{is a \code{logical} indicating whether subsampling should be done with (\code{TRUE}) or without (\code{FALSE} - default) replacement.} } \value{ a \code{matrix} object of three columns giving the expected PD values (mean and variance) for each value of \code{m} } \description{ Calculates a rarefaction curve giving expected phylogenetic diversity (mean and variance) for multiple values of sampling effort. Sampling effort can be defined in terms of the number of individuals, sites or species. 
Expected phylogenetic diversity is calculated using an exact analytical formulation (Nipperess & Matsen 2013) that is both more accurate and more computationally efficient than randomisation methods. } \details{ \code{phylocurve} takes community data and a rooted phylogenetic tree (with branch lengths) and calculates expected mean and variance of Phylogenetic Diversity (PD) for every specified value of \code{m} individuals, sites or species. \code{m} will range from 1 to the total number of individuals/sites/species in increments given by \code{stepm}. Calculations are done using the exact analytical formulae (Nipperess & Matsen, 2013) generalised from the classic equation of Hurlbert (1971). When there are multiple sites in the community data and rarefaction is by individuals or species, sites are first pooled. } \references{ \itemize{ \item{Hurlbert (1971) The nonconcept of Species Diversity: a critique and alternative parameters. \emph{Ecology} 52: 577-586.} \item{Nipperess & Matsen (2013) The mean and variance of phylogenetic diversity under rarefaction. \emph{Methods in Ecology & Evolution} 4: 566-572.}} }
3,091
gpl-3.0
94070b867e20f6a304d8809171526be90c19e34c
RPGOne/Skynet
xgboost-master/R-package/man/callbacks.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/callbacks.R \name{callbacks} \alias{callbacks} \title{Callback closures for booster training.} \description{ These are used to perform various service tasks either during boosting iterations or at the end. This approach helps to modularize many such tasks without bloating the main training methods. } \details{ By default, a callback function is run after each boosting iteration. An R attribute \code{is_pre_iteration} can be set for a callback to define a pre-iteration function. When a callback function has a \code{finalize} parameter, its finalizer part will also be run after the boosting is completed. WARNING: side effects!!! Be aware that these callback functions access and modify things in the environment from which they are called, which is a fairly uncommon thing to do in R. To write a custom callback closure, make sure you first understand the main concepts about R environments. Check either the R documentation on \code{\link[base]{environment}} or the \href{http://adv-r.had.co.nz/Environments.html}{Environments chapter} from the "Advanced R" book by Hadley Wickham. Further, the best option is to read the code of some of the existing callbacks - choose ones that do something similar to what you want to achieve. Also, you would need to get familiar with the objects available inside the \code{xgb.train} and \code{xgb.cv} internal environments. } \seealso{ \code{\link{cb.print.evaluation}}, \code{\link{cb.evaluation.log}}, \code{\link{cb.reset.parameters}}, \code{\link{cb.early.stop}}, \code{\link{cb.save.model}}, \code{\link{cb.cv.predict}}, \code{\link{xgb.train}}, \code{\link{xgb.cv}} }
1,730
bsd-3-clause
0fd1d191ed6156670544d76c2ff972ac86a50c50
glycerine/bigbird
r-3.0.2/src/library/grid/man/grid.layout.Rd
% File src/library/grid/man/grid.layout.Rd % Part of the R package, http://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{grid.layout} \alias{grid.layout} \title{Create a Grid Layout} \description{ This function returns a Grid layout, which describes a subdivision of a rectangular region. } \usage{ grid.layout(nrow = 1, ncol = 1, widths = unit(rep(1, ncol), "null"), heights = unit(rep(1, nrow), "null"), default.units = "null", respect = FALSE, just="centre") } \arguments{ \item{nrow}{An integer describing the number of rows in the layout.} \item{ncol}{An integer describing the number of columns in the layout.} \item{widths}{A numeric vector or unit object describing the widths of the columns in the layout.} \item{heights}{A numeric vector or unit object describing the heights of the rows in the layout.} \item{default.units}{A string indicating the default units to use if \code{widths} or \code{heights} are only given as numeric vectors.} \item{respect}{A logical value or a numeric matrix. If a logical, this indicates whether row heights and column widths should respect each other. If a matrix, non-zero values indicate that the corresponding row and column should be respected (see examples below). } \item{just}{A string or numeric vector specifying how the layout should be justified if it is not the same size as its parent viewport. If there are two values, the first value specifies horizontal justification and the second value specifies vertical justification. Possible string values are: \code{"left"}, \code{"right"}, \code{"centre"}, \code{"center"}, \code{"bottom"}, and \code{"top"}. For numeric values, 0 means left alignment and 1 means right alignment. 
NOTE that in this context, \code{"left"}, for example, means align the left edge of the left-most layout column with the left edge of the parent viewport.} } \details{ The unit objects given for the \code{widths} and \code{heights} of a layout may use a special \code{units} that only has meaning for layouts. This is the \code{"null"} unit, which indicates what relative fraction of the available width/height the column/row occupies. See the reference for a better description of relative widths and heights in layouts. } \section{WARNING}{ This function must NOT be confused with the base R graphics function \code{layout}. In particular, do not use \code{layout} in combination with Grid graphics. The documentation for \code{layout} may provide some useful information and this function should behave identically in comparable situations. The \code{grid.layout} function has \emph{added} the ability to specify a broader range of units for row heights and column widths, and allows for nested layouts (see \code{viewport}). } \value{ A Grid layout object. } \references{Murrell, P. R. (1999), Layouts: A Mechanism for Arranging Plots on a Page, \emph{Journal of Computational and Graphical Statistics}, \bold{8}, 121--134.} \author{Paul Murrell} \seealso{ \link{Grid}, \code{\link{grid.show.layout}}, \code{\link{viewport}}, \code{\link{layout}}} \examples{ ## A variety of layouts (some a bit mind-bending ...) layout.torture() ## Demonstration of layout justification grid.newpage() testlay <- function(just="centre") { pushViewport(viewport(layout=grid.layout(1, 1, widths=unit(1, "inches"), heights=unit(0.25, "npc"), just=just))) pushViewport(viewport(layout.pos.col=1, layout.pos.row=1)) grid.rect() grid.text(paste(just, collapse="-")) popViewport(2) } testlay() testlay(c("left", "top")) testlay(c("right", "top")) testlay(c("right", "bottom")) testlay(c("left", "bottom")) testlay(c("left")) testlay(c("right")) testlay(c("bottom")) testlay(c("top")) } \keyword{dplot}
3,964
bsd-2-clause
0fd1d191ed6156670544d76c2ff972ac86a50c50
lajus/customr
src/library/grid/man/grid.layout.Rd
% File src/library/grid/man/grid.layout.Rd % Part of the R package, http://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{grid.layout} \alias{grid.layout} \title{Create a Grid Layout} \description{ This function returns a Grid layout, which describes a subdivision of a rectangular region. } \usage{ grid.layout(nrow = 1, ncol = 1, widths = unit(rep(1, ncol), "null"), heights = unit(rep(1, nrow), "null"), default.units = "null", respect = FALSE, just="centre") } \arguments{ \item{nrow}{An integer describing the number of rows in the layout.} \item{ncol}{An integer describing the number of columns in the layout.} \item{widths}{A numeric vector or unit object describing the widths of the columns in the layout.} \item{heights}{A numeric vector or unit object describing the heights of the rows in the layout.} \item{default.units}{A string indicating the default units to use if \code{widths} or \code{heights} are only given as numeric vectors.} \item{respect}{A logical value or a numeric matrix. If a logical, this indicates whether row heights and column widths should respect each other. If a matrix, non-zero values indicate that the corresponding row and column should be respected (see examples below). } \item{just}{A string or numeric vector specifying how the layout should be justified if it is not the same size as its parent viewport. If there are two values, the first value specifies horizontal justification and the second value specifies vertical justification. Possible string values are: \code{"left"}, \code{"right"}, \code{"centre"}, \code{"center"}, \code{"bottom"}, and \code{"top"}. For numeric values, 0 means left alignment and 1 means right alignment. 
NOTE that in this context, \code{"left"}, for example, means align the left edge of the left-most layout column with the left edge of the parent viewport.} } \details{ The unit objects given for the \code{widths} and \code{heights} of a layout may use a special \code{units} that only has meaning for layouts. This is the \code{"null"} unit, which indicates what relative fraction of the available width/height the column/row occupies. See the reference for a better description of relative widths and heights in layouts. } \section{WARNING}{ This function must NOT be confused with the base R graphics function \code{layout}. In particular, do not use \code{layout} in combination with Grid graphics. The documentation for \code{layout} may provide some useful information and this function should behave identically in comparable situations. The \code{grid.layout} function has \emph{added} the ability to specify a broader range of units for row heights and column widths, and allows for nested layouts (see \code{viewport}). } \value{ A Grid layout object. } \references{Murrell, P. R. (1999), Layouts: A Mechanism for Arranging Plots on a Page, \emph{Journal of Computational and Graphical Statistics}, \bold{8}, 121--134.} \author{Paul Murrell} \seealso{ \link{Grid}, \code{\link{grid.show.layout}}, \code{\link{viewport}}, \code{\link{layout}}} \examples{ ## A variety of layouts (some a bit mind-bending ...) layout.torture() ## Demonstration of layout justification grid.newpage() testlay <- function(just="centre") { pushViewport(viewport(layout=grid.layout(1, 1, widths=unit(1, "inches"), heights=unit(0.25, "npc"), just=just))) pushViewport(viewport(layout.pos.col=1, layout.pos.row=1)) grid.rect() grid.text(paste(just, collapse="-")) popViewport(2) } testlay() testlay(c("left", "top")) testlay(c("right", "top")) testlay(c("right", "bottom")) testlay(c("left", "bottom")) testlay(c("left")) testlay(c("right")) testlay(c("bottom")) testlay(c("top")) } \keyword{dplot}
3,964
gpl-2.0
374949dc1d3e3e7944c56ae5491da5ce60d74ff6
o-/Rexperiments
src/library/stats/man/convolve.Rd
% File src/library/stats/man/convolve.Rd % Part of the R package, http://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{convolve} \alias{convolve} \title{Convolution of Sequences via FFT} \description{ Use the Fast Fourier Transform to compute the several kinds of convolutions of two sequences. } \usage{ convolve(x, y, conj = TRUE, type = c("circular", "open", "filter")) } \arguments{ \item{x, y}{numeric sequences \emph{of the same length} to be convolved.} \item{conj}{logical; if \code{TRUE}, take the complex \emph{conjugate} before back-transforming (default, and used for usual convolution).} \item{type}{character; one of \code{"circular"}, \code{"open"}, \code{"filter"} (beginning of word is ok). For \code{circular}, the two sequences are treated as \emph{circular}, i.e., periodic. For \code{open} and \code{filter}, the sequences are padded with \code{0}s (from left and right) first; \code{"filter"} returns the middle sub-vector of \code{"open"}, namely, the result of running a weighted mean of \code{x} with weights \code{y}.} } \details{ The Fast Fourier Transform, \code{\link{fft}}, is used for efficiency. The input sequences \code{x} and \code{y} must have the same length if \code{circular} is true. Note that the usual definition of convolution of two sequences \code{x} and \code{y} is given by \code{convolve(x, rev(y), type = "o")}. } \value{ If \code{r <- convolve(x, y, type = "open")} and \code{n <- length(x)}, \code{m <- length(y)}, then \deqn{r_k = \sum_{i} x_{k-m+i} y_{i}}{r[k] = sum(i; x[k-m+i] * y[i])} where the sum is over all valid indices \eqn{i}, for \eqn{k = 1, \dots, n+m-1}. If \code{type == "circular"}, \eqn{n = m} is required, and the above is true for \eqn{i , k = 1,\dots,n} when \eqn{x_{j} := x_{n+j}}{x[j] := x[n+j]} for \eqn{j < 1}. } \references{ Brillinger, D. R. (1981) \emph{Time Series: Data Analysis and Theory}, Second Edition. San Francisco: Holden-Day. 
} \seealso{\code{\link{fft}}, \code{\link{nextn}}, and particularly \code{\link{filter}} (from the \pkg{stats} package) which may be more appropriate. } \examples{ require(graphics) x <- c(0,0,0,100,0,0,0) y <- c(0,0,1, 2 ,1,0,0)/4 zapsmall(convolve(x, y)) # *NOT* what you first thought. zapsmall(convolve(x, y[3:5], type = "f")) # rather x <- rnorm(50) y <- rnorm(50) # Circular convolution *has* this symmetry: all.equal(convolve(x, y, conj = FALSE), rev(convolve(rev(y),x))) n <- length(x <- -20:24) y <- (x-10)^2/1000 + rnorm(x)/8 Han <- function(y) # Hanning convolve(y, c(1,2,1)/4, type = "filter") plot(x, y, main = "Using convolve(.) for Hanning filters") lines(x[-c(1 , n) ], Han(y), col = "red") lines(x[-c(1:2, (n-1):n)], Han(Han(y)), lwd = 2, col = "dark blue") } \keyword{math} \keyword{dplot}
2,883
gpl-2.0
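The \value{} formula in the convolve.Rd entry above — r[k] = sum over i of x[k-m+i] * y[i] for k = 1, ..., n+m-1, with type = "filter" returning the middle sub-vector of type = "open" — can be checked with a small numeric sketch. Python is used here purely for illustration; the function names are hypothetical, and R's own convolve() (which uses the FFT) is the real implementation:

```python
def convolve_open(x, y):
    """Direct O(n*m) evaluation of R's convolve(x, y, type = "open"):
    r[k] = sum_i x[k - m + i] * y[i], for k = 1, ..., n + m - 1
    (1-based indexing, as in the help page's \\value{} section)."""
    n, m = len(x), len(y)
    r = [0.0] * (n + m - 1)
    for k in range(1, n + m):          # 1-based k, as in the docs
        for i in range(1, m + 1):
            j = k - m + i              # 1-based index into x
            if 1 <= j <= n:
                r[k - 1] += x[j - 1] * y[i - 1]
    return r

def convolve_filter(x, y):
    """type = "filter" keeps the middle n - m + 1 values of the
    "open" result: a running weighted mean of x with weights y."""
    n, m = len(x), len(y)
    full = convolve_open(x, y)
    return full[m - 1 : n]             # drop m - 1 edge values each side

# Reproduces the help page's Hanning-style example:
# convolve(c(0,0,0,100,0,0,0), c(1,2,1)/4, type = "f")
print(convolve_filter([0, 0, 0, 100, 0, 0, 0], [0.25, 0.5, 0.25]))
# [0.0, 25.0, 50.0, 25.0, 0.0]
```

As the help page notes, the usual textbook convolution of x and y corresponds to convolve(x, rev(y), type = "o"), i.e. "open" mode correlates x against y rather than against its reversal.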
0fd1d191ed6156670544d76c2ff972ac86a50c50
cxxr-devel/cxxr-svn-mirror
src/library/grid/man/grid.layout.Rd
% File src/library/grid/man/grid.layout.Rd % Part of the R package, http://www.R-project.org % Copyright 1995-2007 R Core Team % Distributed under GPL 2 or later \name{grid.layout} \alias{grid.layout} \title{Create a Grid Layout} \description{ This function returns a Grid layout, which describes a subdivision of a rectangular region. } \usage{ grid.layout(nrow = 1, ncol = 1, widths = unit(rep(1, ncol), "null"), heights = unit(rep(1, nrow), "null"), default.units = "null", respect = FALSE, just="centre") } \arguments{ \item{nrow}{An integer describing the number of rows in the layout.} \item{ncol}{An integer describing the number of columns in the layout.} \item{widths}{A numeric vector or unit object describing the widths of the columns in the layout.} \item{heights}{A numeric vector or unit object describing the heights of the rows in the layout.} \item{default.units}{A string indicating the default units to use if \code{widths} or \code{heights} are only given as numeric vectors.} \item{respect}{A logical value or a numeric matrix. If a logical, this indicates whether row heights and column widths should respect each other. If a matrix, non-zero values indicate that the corresponding row and column should be respected (see examples below). } \item{just}{A string or numeric vector specifying how the layout should be justified if it is not the same size as its parent viewport. If there are two values, the first value specifies horizontal justification and the second value specifies vertical justification. Possible string values are: \code{"left"}, \code{"right"}, \code{"centre"}, \code{"center"}, \code{"bottom"}, and \code{"top"}. For numeric values, 0 means left alignment and 1 means right alignment. 
NOTE that in this context, \code{"left"}, for example, means align the left edge of the left-most layout column with the left edge of the parent viewport.} } \details{ The unit objects given for the \code{widths} and \code{heights} of a layout may use a special unit that only has meaning for layouts. This is the \code{"null"} unit, which indicates what relative fraction of the available width/height the column/row occupies. See the reference for a better description of relative widths and heights in layouts. } \section{WARNING}{ This function must NOT be confused with the base R graphics function \code{layout}. In particular, do not use \code{layout} in combination with Grid graphics. The documentation for \code{layout} may provide some useful information and this function should behave identically in comparable situations. The \code{grid.layout} function has \emph{added} the ability to specify a broader range of units for row heights and column widths, and allows for nested layouts (see \code{viewport}). } \value{ A Grid layout object. } \references{Murrell, P. R. (1999), Layouts: A Mechanism for Arranging Plots on a Page, \emph{Journal of Computational and Graphical Statistics}, \bold{8}, 121--134.} \author{Paul Murrell} \seealso{ \link{Grid}, \code{\link{grid.show.layout}}, \code{\link{viewport}}, \code{\link{layout}}} \examples{ ## A variety of layouts (some a bit mind-bending ...) layout.torture() ## Demonstration of layout justification grid.newpage() testlay <- function(just="centre") { pushViewport(viewport(layout=grid.layout(1, 1, widths=unit(1, "inches"), heights=unit(0.25, "npc"), just=just))) pushViewport(viewport(layout.pos.col=1, layout.pos.row=1)) grid.rect() grid.text(paste(just, collapse="-")) popViewport(2) } testlay() testlay(c("left", "top")) testlay(c("right", "top")) testlay(c("right", "bottom")) testlay(c("left", "bottom")) testlay(c("left")) testlay(c("right")) testlay(c("bottom")) testlay(c("top")) } \keyword{dplot}
3,964
gpl-2.0
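The \code{"null"} unit described in the grid.layout.Rd entry above divides whatever space is left after fixed-size rows/columns among the relative rows/columns, in proportion to their values. A minimal sketch of that allocation arithmetic (Python for illustration only; the function name and signature are hypothetical, and this is a toy model, not grid's actual layout engine):

```python
def allocate_null(total, fixed_sizes, null_weights):
    """Toy model of grid's "null" units: fixed-size rows/columns are
    honoured first, then each "null" row/column receives
    (remaining space) * weight / (sum of all null weights)."""
    remaining = total - sum(fixed_sizes)
    weight_sum = sum(null_weights)
    return [remaining * w / weight_sum for w in null_weights]

# A 10 cm wide viewport with one fixed 2 cm column and two "null"
# columns of relative widths 1 and 3:
print(allocate_null(10, [2], [1, 3]))   # [2.0, 6.0]
```

So unit(c(1, 3), "null") gives the second column three times the leftover width of the first, whatever the viewport size — which is why the help page calls them relative fractions of the available width/height.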
6752987357c52e975d47ddbf33d37e3a6c6c2825
datadotworld/dwapi-r
man/check_user_info_response.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/user_info_response.R \name{check_user_info_response} \alias{check_user_info_response} \title{Validate \code{get_user} response object, returning the object if valid, and stopping with an error message if invalid.} \usage{ check_user_info_response(object) } \arguments{ \item{object}{Object of type \code{\link{user_info_response}}.} } \value{ the object } \description{ Validate \code{get_user} response object, returning the object if valid, and stopping with an error message if invalid. }
570
apache-2.0
dfbb26fcc029dfc48a60e85f60952e32a45ba283
jeroenooms/r-source
src/library/utils/man/shortPathName.Rd
% File src/library/utils/man/shortPathName.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2018 R Core Team % Distributed under GPL 2 or later \name{shortPathName} \alias{shortPathName} \title{Express File Paths in Short Form on Windows} \description{ Convert file paths to the short form. This is an interface to the Windows API call \code{GetShortPathNameW}. } \usage{ shortPathName(path) } \arguments{ \item{path}{character vector of file paths.} } % http://msdn.microsoft.com/en-gb/library/windows/desktop/aa364989%28v=vs.85%29.aspx \details{ For most file systems, the short form is the \sQuote{DOS} form with 8+3 path components and no spaces, and this used to be guaranteed. But some file systems on recent versions of Windows do not have short path names, in which case the long-name path will be returned instead. } \value{ A character vector. The path separator will be \code{\\}. If a file path does not exist, the supplied path will be returned with slashes replaced by backslashes. } \note{ This is only available on Windows. } \seealso{ \code{\link{normalizePath}}. } \examples{% (spacing: for nice rendering of visual part of example) if(.Platform$OS.type == "windows") withAutoprint({ \donttest{ cat(shortPathName(c(R.home(), tempdir())), sep = "\n")} \dontshow{ cat(shortPathName(R.home()), sep = "\n")} }) } \keyword{ utilities }
1,392
gpl-2.0
} \seealso{\code{\link{fft}}, \code{\link{nextn}}, and particularly \code{\link{filter}} (from the \pkg{stats} package) which may be more appropriate. } \examples{ require(graphics) x <- c(0,0,0,100,0,0,0) y <- c(0,0,1, 2 ,1,0,0)/4 zapsmall(convolve(x, y)) # *NOT* what you first thought. zapsmall(convolve(x, y[3:5], type = "f")) # rather x <- rnorm(50) y <- rnorm(50) # Circular convolution *has* this symmetry: all.equal(convolve(x, y, conj = FALSE), rev(convolve(rev(y),x))) n <- length(x <- -20:24) y <- (x-10)^2/1000 + rnorm(x)/8 Han <- function(y) # Hanning convolve(y, c(1,2,1)/4, type = "filter") plot(x, y, main = "Using convolve(.) for Hanning filters") lines(x[-c(1 , n) ], Han(y), col = "red") lines(x[-c(1:2, (n-1):n)], Han(Han(y)), lwd = 2, col = "dark blue") } \keyword{math} \keyword{dplot}
2,883
gpl-2.0
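The identity behind \code{convolve} (multiply the two spectra, conjugating the second one when \code{conj = TRUE}, then back-transform) can be checked directly. The sketch below is stdlib Python with a naive O(n^2) DFT standing in for the real FFT R uses; \code{convolve_circular} is a hypothetical helper reproducing the help page's first example, whose "not what you first thought" result is a circular cross-correlation:

```python
import cmath

# Naive O(n^2) discrete Fourier transform (R's convolve uses a real FFT;
# this is just for illustration). Inverse transform divides by n.
def dft(v, inverse=False):
    n = len(v)
    sign = 1 if inverse else -1
    out = [sum(v[m] * cmath.exp(sign * 2j * cmath.pi * m * k / n)
               for m in range(n)) for k in range(n)]
    return [z / n for z in out] if inverse else out

# Mirror R's convolve(x, y, conj = TRUE, type = "circular"): conjugating
# the second spectrum turns the operation into circular cross-correlation.
def convolve_circular(x, y, conj=True):
    fx, fy = dft(x), dft(y)
    if conj:
        fy = [z.conjugate() for z in fy]
    return [z.real for z in dft([a * b for a, b in zip(fx, fy)], inverse=True)]

# the help page's first example: the peak ends up split across the ends
x = [0, 0, 0, 100, 0, 0, 0]
y = [v / 4 for v in (0, 0, 1, 2, 1, 0, 0)]
r = convolve_circular(x, y)   # approximately [50, 25, 0, 0, 0, 0, 25]
```

With `conj=False` the same routine gives the plain circular convolution, which is what the document's symmetry identity `convolve(x, y, conj = FALSE) == rev(convolve(rev(y), x))` is about.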
ad8ab78d93d61590ab2f6cc787a5158b748a6129
BlackEdder/mcmcsample
man/inside.ci.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/credible.R \name{inside.ci} \alias{inside.ci} \title{Calculate samples within credibility region} \usage{ inside.ci(samples, ci = 0.9, method = "bin", ...) } \arguments{ \item{samples}{Data frame holding the posterior samples. Each row is a sample, each column a parameter in the sample} \item{ci}{Minimum fraction the credibility region should cover} \item{method}{Method to use. Currently bin, chull and minmax are supported} \item{...}{Parameters forwarded to the method used for calculating the regions} } \value{ A logical vector, with \code{TRUE} for samples inside the credibility region } \description{ Calculate which samples fall inside a credibility region and which fall outside. }
769
gpl-3.0
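The minmax-style region named in the `method` argument above can be sketched quickly: a sample is kept when every coordinate lies inside that parameter's central ci-interval. The Python sketch below is a hypothetical simplification (the function name and the index-based quantile bounds are mine, not mcmcsample's implementation):

```python
# Sketch of a "minmax"-style credibility region: a sample is inside when
# every coordinate lies within that parameter's central ci-interval.
def inside_ci_minmax(samples, ci=0.9):
    lo_q, hi_q = (1 - ci) / 2, 1 - (1 - ci) / 2
    ncol = len(samples[0])
    bounds = []
    for j in range(ncol):
        col = sorted(row[j] for row in samples)
        n = len(col)
        # nearest-rank quantile bounds for this parameter
        bounds.append((col[round(lo_q * (n - 1))], col[round(hi_q * (n - 1))]))
    return [all(lo <= row[j] <= hi for j, (lo, hi) in enumerate(bounds))
            for row in samples]

# two perfectly anti-correlated parameters over 101 samples
samples = [(float(i), float(100 - i)) for i in range(101)]
flags = inside_ci_minmax(samples, ci=0.9)   # True for i in 5..95
```

Note this axis-aligned box is the crudest of the three methods the help page lists; `chull`-style regions adapt to correlated posteriors, which this box does not.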
963cbad9e380b0c7de6cb51b09a3005270d1f117
jmp75/metaheuristics
R/pkgs/mh/man/loadMhLog.Rd
% Generated by roxygen2 (4.1.1): do not edit by hand % Please edit documentation in R/visualisation.r \name{loadMhLog} \alias{loadMhLog} \title{Load a CSV log file of an optimisation} \usage{ loadMhLog(fn) } \arguments{ \item{fn}{the file name of the CSV} } \value{ a data frame, as loaded with read.csv, with an added column 'PointNumber' } \description{ Load a CSV log file of an optimisation }
397
lgpl-2.1
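The behaviour documented above (read a CSV log, then append a 'PointNumber' column numbering the rows) can be mimicked in a few lines. In this Python sketch the helper name and the two-column CSV layout are invented for illustration; only the 'PointNumber' column name comes from the help text:

```python
import csv
import io

# Mimic loadMhLog(): parse a CSV optimisation log and append a
# 'PointNumber' column that numbers the rows from 1.
def load_mh_log(text):
    rows = list(csv.DictReader(io.StringIO(text)))
    for i, row in enumerate(rows, start=1):
        row["PointNumber"] = i
    return rows

log = "Score,Param\n0.9,1.0\n0.5,2.0\n"
rows = load_mh_log(log)
```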
fdede4de9f6d0874bf3331f7412aae6868350b08
jread-usgs/repgen
man/json.Rd
% Generated by roxygen2 (4.1.1): do not edit by hand % Please edit documentation in R/utils-json.R \name{json} \alias{json} \title{Import a JSON file to use for report} \usage{ json(file) } \arguments{ \item{file}{incoming json file} } \description{ Import a JSON file to use for report }
290
cc0-1.0
0b7a1f0ae7fd583d4663875a680a2d18d394024c
prafols/rMSI
man/insertRasterImageAtCols.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/libimgramdisk.R \name{insertRasterImageAtCols} \alias{insertRasterImageAtCols} \title{Inserts an image at specified Cols of a rMSI object.} \usage{ insertRasterImageAtCols(Img, Cols, raster_matrix) } \arguments{ \item{Img}{the rMSI object where the data is stored (ramdisk).} \item{Cols}{the column indexes at which data will be inserted} \item{raster_matrix}{a raster image represented as a matrix of pixel values.} } \description{ A raster image provided as a matrix is inserted at the given Cols with a Gaussian shape. The rows of raster_matrix correspond to the X direction. }
648
gpl-3.0
5f98c3b614f97398d9bd99b2dfd4409c3ba8900d
swarm-lab/Rvision
man/grabCut.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/transform.R \name{grabCut} \alias{grabCut} \title{Segmentation with GrabCut Algorithm} \usage{ grabCut( image, mask, rect = rep(1, 4), bgdModel, fgdModel, iter = 1, mode = "EVAL" ) } \arguments{ \item{image}{An 8-bit (8U), 3-channel \code{\link{Image}} object to segment.} \item{mask}{An 8-bit (8U), single-channel \code{\link{Image}} object. Each pixel can take any of the following 4 values: \itemize{ \item{0: }{an obvious background pixel.} \item{1: }{an obvious foreground (object) pixel.} \item{2: }{a possible background pixel.} \item{3: }{a possible foreground pixel.} }} \item{rect}{A vector defining the region of interest containing a segmented object. The pixels outside of the region of interest are marked as "obvious background". \code{rect} must be a 4-element numeric vector whose elements correspond, in this order, to the x and y coordinates of the bottom left corner of the region of interest, and to its width and height. The parameter is only used when \code{mode="RECT"} (default: rep(1, 4)).} \item{bgdModel}{A 1x65, single-channel, 64-bit (64F) \code{\link{Image}} object to set and store the parameters of the background model.} \item{fgdModel}{A 1x65, single-channel, 64-bit (64F) \code{\link{Image}} object to set and store the parameters of the foreground model.} \item{iter}{Number of iterations (default: 1) the algorithm should make before returning the result. Note that the result can be refined with further calls with \code{mode="MASK"} or \code{mode="EVAL"}.} \item{mode}{A character string indicating the operation mode of the function. It can be any of the following: \itemize{ \item{"RECT": }{The function initializes the state and the mask using the provided \code{rect}. 
After that it runs \code{iter} iterations of the algorithm.} \item{"MASK":}{The function initializes the state using the provided \code{mask}.} \item{"EVAL":}{The value means that the function should just resume.} \item{"FREEZE":}{The value means that the function should just run the grabCut algorithm (a single iteration) with the fixed model.} }} } \value{ This function returns nothing. It modifies in place \code{mask}, \code{bgdModel}, and \code{fgdModel}. } \description{ \code{grabCut} performs image segmentation (i.e., partition of the image into coherent regions) using the GrabCut method. } \examples{ balloon <- image(system.file("sample_img/balloon1.png", package = "Rvision")) mask <- zeros(nrow(balloon), ncol(balloon), 1) bgdModel <- zeros(1, 65, 1, "64F") fgdModel <- zeros(1, 65, 1, "64F") grabCut(balloon, mask, c(290, 170, 160, 160), bgdModel, fgdModel, iter = 5, mode = "RECT") } \seealso{ \code{\link{Image}} } \author{ Simon Garnier, \email{[email protected]} }
2,842
gpl-3.0
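The 4-value mask convention described above collapses to a binary foreground mask in the standard GrabCut way: values 1 (obvious foreground) and 3 (possible foreground) count as foreground, 0 and 2 as background. The Python helper below is a hypothetical illustration of that convention, not part of Rvision:

```python
# Collapse a 4-value GrabCut mask to a binary foreground mask:
# 0 (obvious bgd) and 2 (possible bgd) -> 0,
# 1 (obvious fgd) and 3 (possible fgd) -> 1.
def mask_to_foreground(mask):
    return [[1 if v in (1, 3) else 0 for v in row] for row in mask]

mask = [[0, 2, 1],
        [3, 0, 2]]
fg = mask_to_foreground(mask)   # [[0, 0, 1], [1, 0, 0]]
```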
b69f7417d50aee177fa5c43a407d8c652dc2a710
pietrofranceschi/LCMSdemo
man/ExWarp.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/ptw.R \name{ExWarp} \alias{ExWarp} \title{ExWarp} \usage{ ExWarp() } \value{ an interactive demo running on the console. A web-based Shiny version of the demo is also available } \description{ Demo showing how parametric time warping (ptw) can be used to perform a simple form of retention time alignment } \examples{ }
403
gpl-3.0
e58b6ac8df6fff012865346e9e3e42b71dab55b4
basilrabi/mansched
man/validEmpStatus.Rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/data.R \docType{data} \name{validEmpStatus} \alias{validEmpStatus} \title{Valid employment \code{status}} \format{ character vector } \usage{ validEmpStatus } \description{ A character vector containing the valid employment status in Taganito Mine. } \keyword{datasets}
348
gpl-3.0
d92529fb60863392760ce51e2b9728e360dd0f99
ecjbosu/fSEAL
PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/chart.AcarSim.Rd
\name{chart.AcarSim} \alias{chart.AcarSim} \title{Acar-Shane Maximum Loss Plot} \usage{ chart.AcarSim(R) } \arguments{ \item{R}{an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns} } \description{ To get some insight into the relationships between maximum drawdown per unit of volatility and mean return divided by volatility, we have carried out Monte Carlo simulations. We have simulated cash flows over a period of 36 monthly returns and measured maximum drawdown for levels of annualised return divided by volatility varying from minus \emph{two} to \emph{two} in steps of \emph{0.1}. The process has been repeated \bold{six thousand times}. } \details{ Unfortunately, there is no \bold{analytical formula} to establish the maximum drawdown properties under the random walk assumption. We should note first that, due to its definition, the maximum drawdown divided by volatility is a function only of the ratio of mean to volatility. \deqn{MD/\sigma = \min\left(\sum X(j)\right)/\sigma = F(\mu/\sigma)} where j varies from 1 to n, the number of drawdowns in the simulation. } \examples{ library(PerformanceAnalytics) data(edhec) chart.AcarSim(edhec) } \author{ Shubhankit Mohan } \references{ Maximum Loss and Maximum Drawdown in Financial Markets, \emph{International Conference Sponsored by BNP and Imperial College on: Forecasting Financial Markets, London, United Kingdom, May 1997} \url{http://www.intelligenthedgefundinvesting.com/pubs/easj.pdf} } \keyword{Drawdown} \keyword{Loss} \keyword{Maximum} \keyword{Simulated}
1,693
gpl-2.0
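The Monte Carlo experiment behind the chart is easy to sketch: draw 36 monthly returns at a given mean/volatility ratio, compute the maximum drawdown of the cumulative-sum path, and average over many repetitions. The stdlib-Python sketch below uses fewer repetitions than the documented six thousand, and the helper names are mine:

```python
import random

# Most negative peak-to-trough move of the running cumulative sum: the
# MD quantity the chart plots per unit of volatility.
def max_drawdown(returns):
    peak = cum = mdd = 0.0
    for r in returns:
        cum += r
        peak = max(peak, cum)
        mdd = min(mdd, cum - peak)
    return mdd

random.seed(1)
sigma = 1.0
avg_mdd = {}
for ratio in (-1.0, 0.0, 1.0):          # mean / volatility
    sims = [max_drawdown(random.gauss(ratio * sigma, sigma) for _ in range(36))
            for _ in range(2000)]
    avg_mdd[ratio] = sum(sims) / len(sims)
```

As the help page argues, the expected drawdown per unit of volatility depends only on this ratio: negative drift produces much deeper average drawdowns than positive drift.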
8fc8325bfc84a3276085a91ca2676ac48d7bab57
cxxr-devel/cxxr-svn-mirror
src/library/Recommended/rpart/man/rpart.Rd
\name{rpart} \alias{rpart} %\alias{rpartcallback} \title{ Recursive Partitioning and Regression Trees } \description{ Fit a \code{rpart} model } \usage{ rpart(formula, data, weights, subset, na.action = na.rpart, method, model = FALSE, x = FALSE, y = TRUE, parms, control, cost, \dots) } \arguments{ \item{formula}{a \link{formula}, with a response but no interaction terms. If this is a data frame, it is taken as the model frame (see \code{\link{model.frame}}).} \item{data}{an optional data frame in which to interpret the variables named in the formula.} \item{weights}{optional case weights.} \item{subset}{optional expression saying that only a subset of the rows of the data should be used in the fit.} \item{na.action}{the default action deletes all observations for which \code{y} is missing, but keeps those in which one or more predictors are missing.} \item{method}{one of \code{"anova"}, \code{"poisson"}, \code{"class"} or \code{"exp"}. If \code{method} is missing then the routine tries to make an intelligent guess. If \code{y} is a survival object, then \code{method = "exp"} is assumed, if \code{y} has 2 columns then \code{method = "poisson"} is assumed, if \code{y} is a factor then \code{method = "class"} is assumed, otherwise \code{method = "anova"} is assumed. It is wisest to specify the method directly, especially as more criteria may be added to the function in future. Alternatively, \code{method} can be a list of functions named \code{init}, \code{split} and \code{eval}. Examples are given in the file \file{tests/usersplits.R} in the sources, and in the vignettes \sQuote{User Written Split Functions}.} \item{model}{if logical: keep a copy of the model frame in the result? 
If the input value for \code{model} is a model frame (likely from an earlier call to the \code{rpart} function), then this frame is used rather than constructing new data.} \item{x}{keep a copy of the \code{x} matrix in the result.} \item{y}{keep a copy of the dependent variable in the result. If missing and \code{model} is supplied this defaults to \code{FALSE}.} \item{parms}{optional parameters for the splitting function.\cr Anova splitting has no parameters.\cr Poisson splitting has a single parameter, the coefficient of variation of the prior distribution on the rates. The default value is 1.\cr Exponential splitting has the same parameter as Poisson.\cr For classification splitting, the list can contain any of: the vector of prior probabilities (component \code{prior}), the loss matrix (component \code{loss}) or the splitting index (component \code{split}). The priors must be positive and sum to 1. The loss matrix must have zeros on the diagonal and positive off-diagonal elements. The splitting index can be \code{gini} or \code{information}. The default priors are proportional to the data counts, the losses default to 1, and the split defaults to \code{gini}.} \item{control}{a list of options that control details of the \code{rpart} algorithm. See \code{\link{rpart.control}}.} \item{cost}{a vector of non-negative costs, one for each variable in the model. Defaults to one for all variables. These are scalings to be applied when considering splits, so the improvement on splitting on a variable is divided by its cost in deciding which split to choose.} \item{\dots}{arguments to \code{\link{rpart.control}} may also be specified in the call to \code{rpart}. They are checked against the list of valid arguments.} } \details{ This differs from the \code{tree} function in S mainly in its handling of surrogate variables. In most details it follows Breiman \emph{et al.} (1984) quite closely. \R package \pkg{tree} provides a re-implementation of \code{tree}. 
} \value{ An object of class \code{rpart}. See \code{\link{rpart.object}}. } \references{ Breiman L., Friedman J. H., Olshen R. A., and Stone, C. J. (1984) \emph{Classification and Regression Trees.} Wadsworth. } \seealso{ \code{\link{rpart.control}}, \code{\link{rpart.object}}, \code{\link{summary.rpart}}, \code{\link{print.rpart}} } \examples{ fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis) fit2 <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis, parms = list(prior = c(.65,.35), split = "information")) fit3 <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis, control = rpart.control(cp = 0.05)) par(mfrow = c(1,2), xpd = NA) # otherwise on some devices the text is clipped plot(fit) text(fit, use.n = TRUE) plot(fit2) text(fit2, use.n = TRUE) } \keyword{tree}
4,799
gpl-2.0
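The default \code{gini} splitting index mentioned in the \code{parms} argument can be illustrated with a toy split search on a single numeric predictor. The pure-Python sketch below is not rpart's actual implementation (which is in C and also handles priors, losses and surrogate variables); it just picks the threshold minimizing the weighted Gini impurity of the two children:

```python
# Gini impurity of a list of 0/1 class labels: 2 * p * (1 - p).
def gini(labels):
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n            # fraction of class 1
    return 2 * p * (1 - p)

# Exhaustive split search: try every threshold between consecutive sorted
# x values, keep the one with the lowest weighted child impurity.
def best_split(xs, ys):
    pairs = sorted(zip(xs, ys))
    n = len(pairs)
    best_threshold, best_score = None, gini(ys)
    for i in range(1, n):
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best_score:
            best_threshold = (pairs[i - 1][0] + pairs[i][0]) / 2
            best_score = score
    return best_threshold, best_score

xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 1, 1, 1]
threshold, impurity = best_split(xs, ys)   # perfectly separable at 6.5
```

Recursive partitioning applies this search to every predictor at every node, which is exactly where the per-variable \code{cost} scaling of the improvement comes in.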
8fc8325bfc84a3276085a91ca2676ac48d7bab57
andeek/rpart
man/rpart.Rd
\name{rpart} \alias{rpart} %\alias{rpartcallback} \title{ Recursive Partitioning and Regression Trees } \description{ Fit a \code{rpart} model } \usage{ rpart(formula, data, weights, subset, na.action = na.rpart, method, model = FALSE, x = FALSE, y = TRUE, parms, control, cost, \dots) } \arguments{ \item{formula}{a \link{formula}, with a response but no interaction terms. If this is a data frame, it is taken as the model frame (see \code{\link{model.frame}}).} \item{data}{an optional data frame in which to interpret the variables named in the formula.} \item{weights}{optional case weights.} \item{subset}{optional expression saying that only a subset of the rows of the data should be used in the fit.} \item{na.action}{the default action deletes all observations for which \code{y} is missing, but keeps those in which one or more predictors are missing.} \item{method}{one of \code{"anova"}, \code{"poisson"}, \code{"class"} or \code{"exp"}. If \code{method} is missing then the routine tries to make an intelligent guess. If \code{y} is a survival object, then \code{method = "exp"} is assumed, if \code{y} has 2 columns then \code{method = "poisson"} is assumed, if \code{y} is a factor then \code{method = "class"} is assumed, otherwise \code{method = "anova"} is assumed. It is wisest to specify the method directly, especially as more criteria may be added to the function in future. Alternatively, \code{method} can be a list of functions named \code{init}, \code{split} and \code{eval}. Examples are given in the file \file{tests/usersplits.R} in the sources, and in the vignettes \sQuote{User Written Split Functions}.} \item{model}{if logical: keep a copy of the model frame in the result? 
If the input value for \code{model} is a model frame (likely from an earlier call to the \code{rpart} function), then this frame is used rather than constructing new data.} \item{x}{keep a copy of the \code{x} matrix in the result.} \item{y}{keep a copy of the dependent variable in the result. If missing and \code{model} is supplied this defaults to \code{FALSE}.} \item{parms}{optional parameters for the splitting function.\cr Anova splitting has no parameters.\cr Poisson splitting has a single parameter, the coefficient of variation of the prior distribution on the rates. The default value is 1.\cr Exponential splitting has the same parameter as Poisson.\cr For classification splitting, the list can contain any of: the vector of prior probabilities (component \code{prior}), the loss matrix (component \code{loss}) or the splitting index (component \code{split}). The priors must be positive and sum to 1. The loss matrix must have zeros on the diagonal and positive off-diagonal elements. The splitting index can be \code{gini} or \code{information}. The default priors are proportional to the data counts, the losses default to 1, and the split defaults to \code{gini}.} \item{control}{a list of options that control details of the \code{rpart} algorithm. See \code{\link{rpart.control}}.} \item{cost}{a vector of non-negative costs, one for each variable in the model. Defaults to one for all variables. These are scalings to be applied when considering splits, so the improvement on splitting on a variable is divided by its cost in deciding which split to choose.} \item{\dots}{arguments to \code{\link{rpart.control}} may also be specified in the call to \code{rpart}. They are checked against the list of valid arguments.} } \details{ This differs from the \code{tree} function in S mainly in its handling of surrogate variables. In most details it follows Breiman \emph{et al.} (1984) quite closely. \R package \pkg{tree} provides a re-implementation of \code{tree}. 
} \value{ An object of class \code{rpart}. See \code{\link{rpart.object}}. } \references{ Breiman L., Friedman J. H., Olshen R. A., and Stone, C. J. (1984) \emph{Classification and Regression Trees.} Wadsworth. } \seealso{ \code{\link{rpart.control}}, \code{\link{rpart.object}}, \code{\link{summary.rpart}}, \code{\link{print.rpart}} } \examples{ fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis) fit2 <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis, parms = list(prior = c(.65,.35), split = "information")) fit3 <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis, control = rpart.control(cp = 0.05)) par(mfrow = c(1,2), xpd = NA) # otherwise on some devices the text is clipped plot(fit) text(fit, use.n = TRUE) plot(fit2) text(fit2, use.n = TRUE) } \keyword{tree}
4,799
gpl-3.0
5c7a722c94c89cbdf1a654b846c02ca04fed9f1c
aviralg/R-dyntrace
src/library/stats/man/Hypergeometric.Rd
% File src/library/stats/man/Hypergeometric.Rd % Part of the R package, https://www.R-project.org % Copyright 1995-2016 R Core Team % Distributed under GPL 2 or later \name{Hypergeometric} \alias{Hypergeometric} \alias{dhyper} \alias{phyper} \alias{qhyper} \alias{rhyper} \title{The Hypergeometric Distribution} \description{ Density, distribution function, quantile function and random generation for the hypergeometric distribution. } \usage{ dhyper(x, m, n, k, log = FALSE) phyper(q, m, n, k, lower.tail = TRUE, log.p = FALSE) qhyper(p, m, n, k, lower.tail = TRUE, log.p = FALSE) rhyper(nn, m, n, k) } \arguments{ \item{x, q}{vector of quantiles representing the number of white balls drawn without replacement from an urn which contains both black and white balls.} \item{m}{the number of white balls in the urn.} \item{n}{the number of black balls in the urn.} \item{k}{the number of balls drawn from the urn.} \item{p}{probability, it must be between 0 and 1.} \item{nn}{number of observations. If \code{length(nn) > 1}, the length is taken to be the number required.} \item{log, log.p}{logical; if TRUE, probabilities p are given as log(p).} \item{lower.tail}{logical; if TRUE (default), probabilities are \eqn{P[X \le x]}, otherwise, \eqn{P[X > x]}.} } \value{ \code{dhyper} gives the density, \code{phyper} gives the distribution function, \code{qhyper} gives the quantile function, and \code{rhyper} generates random deviates. Invalid arguments will result in return value \code{NaN}, with a warning. The length of the result is determined by \code{n} for \code{rhyper}, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than \code{n} are recycled to the length of the result. Only the first elements of the logical arguments are used. } \details{ The hypergeometric distribution is used for sampling \emph{without} replacement. 
The density of this distribution with parameters \code{m}, \code{n} and \code{k} (named \eqn{Np}, \eqn{N-Np}, and \eqn{n}, respectively in the reference below) is given by \deqn{ p(x) = \left. {m \choose x}{n \choose k-x} \right/ {m+n \choose k}% }{p(x) = choose(m, x) choose(n, k-x) / choose(m+n, k)} for \eqn{x = 0, \ldots, k}. Note that \eqn{p(x)} is non-zero only for \eqn{\max(0, k-n) \le x \le \min(k, m)}{max(0, k-n) <= x <= min(k, m)}. With \eqn{p := m/(m+n)} (hence \eqn{Np = N \times p} in the reference's notation), the first two moments are mean \deqn{E[X] = \mu = k p} and variance \deqn{\mbox{Var}(X) = k p (1 - p) \frac{m+n-k}{m+n-1},}{% Var(X) = k p (1 - p) * (m+n-k)/(m+n-1),} which shows the closeness to the Binomial\eqn{(k,p)} (where the hypergeometric has smaller variance unless \eqn{k = 1}). The quantile is defined as the smallest value \eqn{x} such that \eqn{F(x) \ge p}, where \eqn{F} is the distribution function. If one of \eqn{m, n, k}, exceeds \code{\link{.Machine}$integer.max}, currently the equivalent of \code{qhyper(runif(nn), m,n,k)} is used, when a binomial approximation may be considerably more efficient. } \source{ \code{dhyper} computes via binomial probabilities, using code contributed by Catherine Loader (see \code{\link{dbinom}}). \code{phyper} is based on calculating \code{dhyper} and \code{phyper(...)/dhyper(...)} (as a summation), based on ideas of Ian Smith and Morten Welinder. \code{qhyper} is based on inversion. \code{rhyper} is based on a corrected version of Kachitvichyanukul, V. and Schmeiser, B. (1985). Computer generation of hypergeometric random variates. \emph{Journal of Statistical Computation and Simulation}, \bold{22}, 127--145. } \references{ Johnson, N. L., Kotz, S., and Kemp, A. W. (1992) \emph{Univariate Discrete Distributions}, Second Edition. New York: Wiley. } \seealso{ \link{Distributions} for other standard distributions. 
} \examples{ m <- 10; n <- 7; k <- 8 x <- 0:(k+1) rbind(phyper(x, m, n, k), dhyper(x, m, n, k)) all(phyper(x, m, n, k) == cumsum(dhyper(x, m, n, k))) # FALSE \donttest{## but error is very small: signif(phyper(x, m, n, k) - cumsum(dhyper(x, m, n, k)), digits = 3) }} \keyword{distribution}
4,269
gpl-2.0
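The density formula and the two moments quoted above are easy to verify numerically. The sketch below re-implements \code{dhyper} in Python with \code{math.comb} (an illustrative stand-in, not R's Loader-based code) and checks that the mean and variance match k p and k p (1-p) (m+n-k)/(m+n-1):

```python
from math import comb

# Hypergeometric pmf: x white balls drawn from an urn of m white and
# n black, with k balls drawn in total. Zero outside the documented
# support max(0, k-n) <= x <= min(k, m).
def dhyper(x, m, n, k):
    if x < max(0, k - n) or x > min(k, m):
        return 0.0
    return comb(m, x) * comb(n, k - x) / comb(m + n, k)

m, n, k = 10, 7, 8
pmf = [dhyper(x, m, n, k) for x in range(k + 1)]
mean = sum(x * p for x, p in enumerate(pmf))
var = sum((x - mean) ** 2 * p for x, p in enumerate(pmf))
p = m / (m + n)   # E[X] = k*p, Var(X) = k*p*(1-p)*(m+n-k)/(m+n-1)
```

The variance factor (m+n-k)/(m+n-1) is the finite-population correction that makes the distribution tighter than the Binomial(k, p) it approaches as the urn grows.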