% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/CVE.R
\name{cve}
\alias{cve}
\title{Conditional Variance Estimator (CVE).}
\usage{
cve(formula, data, method = "simple", max.dim = 10L, ...)
}
\arguments{
\item{formula}{an object of class \code{"formula"} which is a symbolic
description of the model to be fitted.}
\item{data}{an optional data frame, containing the data for the formula if
supplied.}
\item{method}{specifies the CVE method variation as one of
\itemize{
\item "simple" exact implementation as described in the paper listed
below.
\item "weighted" variation with addaptive weighting of slices.
}}
\item{...}{Parameters passed on to \code{cve.call}.}
}
\value{
an S3 object of class \code{cve} with components:
\describe{
\item{X}{Original training data,}
\item{Y}{Response of original training data,}
\item{method}{Name of the method used,}
\item{call}{The matched call,}
\item{res}{List of components \code{V, L, B, loss, h} and \code{k} for
each dimension \eqn{k = min.dim, ..., max.dim}.}
}
}
\description{
Conditional Variance Estimation (CVE) is a novel sufficient dimension
reduction (SDR) method for regressions satisfying \eqn{E(Y|X) = E(Y|B'X)},
where \eqn{B'X} is a lower dimensional projection of the predictors. CVE,
similarly to its main competitor, the mean average variance estimation
(MAVE), is not based on inverse regression, and does not require the
restrictive linearity and constant variance conditions of moment based SDR
methods. CVE is data-driven and applies to additive error regressions with
continuous predictors and link function. The effectiveness and accuracy of
CVE compared to MAVE and other SDR techniques are demonstrated in simulation
studies. CVE is shown to outperform MAVE in some model set-ups, while it
remains largely on par in most others.

Let \eqn{Y} denote a real-valued univariate response and \eqn{X} a real
\eqn{p}-dimensional covariate vector. We assume that the dependence of
\eqn{Y} and \eqn{X} is modelled by
\deqn{Y = g(B'X) + \epsilon}
where \eqn{X} is independent of \eqn{\epsilon} with positive definite
variance-covariance matrix \eqn{Var(X) = \Sigma_X}. \eqn{\epsilon} is a mean
zero random variable with finite \eqn{Var(\epsilon) = E(\epsilon^2)}, \eqn{g}
is an unknown, continuous non-constant function,
and \eqn{B = (b_1, ..., b_k)} is
a real \eqn{p \times k}{p x k} matrix of rank \eqn{k \leq p}{k <= p}.
Without loss of generality \eqn{B} is assumed to be orthonormal.
}
\examples{
# create dataset
x <- matrix(rnorm(400), 100, 4)
y <- x[, 1] + x[, 2] + rnorm(100)
# Call CVE.
dr <- cve(y ~ x)
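# The returned object is of S3 class "cve"; its components (X, Y, method,
# call and the per-dimension results in res) are listed under Value.
str(dr, max.level = 1)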
# Call weighted CVE.
dr.weighted <- cve(y ~ x, method = "weighted")
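# Each fit stores the name of the method variant used and the matched call.
dr$method
dr.weighted$method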
# Reduced (projected) training data for dimension 1.
x.proj <- directions(dr, 1)
# Extract SDR subspace basis of dimension 1.
B <- coef(dr, 1)
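# Sketch of a regression with a nonlinear link g, matching the model
# Y = g(B'X) + epsilon from the Description; the true B here spans only
# the first coordinate direction, so dimension 1 should be recovered.
y2 <- exp(x[, 1]) + rnorm(100, sd = 0.1)
dr2 <- cve(y2 ~ x, max.dim = 2L)
B2 <- coef(dr2, 1)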
}
\references{
Fertl, L. and Bura, E. (2019). Conditional Variance Estimation for
Sufficient Dimension Reduction. Working Paper.
}
\seealso{
For a detailed description of \code{formula} see
\code{\link{formula}}.
}