**Trend surface analysis** is a way to model the **spatial structure in species distributions** by **regressing occurrence data on the spatial coordinates** *x* and *y* for a linear trend, and/or on **polynomial terms** of these coordinates (including the sums of powers and cross-products of the coordinates) for curvilinear trends (Legendre & Legendre, 1998; Borcard et al., 2011). Second- and third-degree polynomials are often used. The *multTSA* function below, which is included in the *fuzzySim* package, allows specifying the degree of the spatial polynomial to use. By **default**, it uses a **3rd-degree** polynomial, performs backward **stepwise AIC selection** of the polynomial terms to include, and returns the **presence probability** resulting from the spatial trend, although these options can be changed: for example, if you want the result to be a new spatial variable on the same scale as the input coordinates, you can set *P = FALSE*. This function can be used for one or multiple species at a time, in a similar way to the *multGLM* function in the same package.
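To see what those polynomial terms look like, here is a quick sketch in base R using a small hypothetical coordinate grid: for two coordinates and degree 3, *poly* produces nine terms (the powers and cross-products of *x* and *y* up to the 3rd degree).

```r
# the terms of a 3rd-degree trend surface for a pair of coordinates:
coords <- expand.grid(x = 1:5, y = 1:5)  # a small hypothetical grid
tsa.terms <- poly(as.matrix(coords), degree = 3, raw = TRUE)
ncol(tsa.terms)  # 9 terms: x, x^2, x^3, y, x*y, x^2*y, y^2, x*y^2, y^3
```

These are the candidate predictors that the stepwise selection below chooses among.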

multTSA <- function(data, sp.cols, coord.cols, id.col = NULL, degree = 3,
                    step = TRUE, direction = "backward", P = TRUE,
                    Favourability = FALSE, suffix = "_TS", save.models = FALSE) {

  start.time <- Sys.time()

  # input checks:
  stopifnot(
    na.omit(as.matrix(data[, sp.cols])) %in% c(0, 1),
    length(sp.cols) > 0,
    length(sp.cols) <= ncol(data) - length(coord.cols) - length(id.col),
    sp.cols %in% 1:ncol(data) | sp.cols %in% colnames(data),
    is.null(coord.cols) | length(coord.cols) == 2,
    is.null(coord.cols) | coord.cols %in% 1:ncol(data) |
      coord.cols %in% colnames(data),
    is.numeric(as.matrix(data[, coord.cols])),
    is.null(id.col) | id.col %in% 1:ncol(data) | id.col %in% colnames(data),
    degree %% 1 == 0,
    is.logical(step),
    is.logical(P),
    is.logical(Favourability),
    is.logical(save.models)
  )

  # build the polynomial terms of the spatial coordinates:
  coords.poly <- as.data.frame(poly(as.matrix(data[, coord.cols]),
                                    degree = degree, raw = TRUE))
  n.poly.terms <- ncol(coords.poly)
  colnames(coords.poly) <- paste0("poly", 1:n.poly.terms)

  sp.data <- as.matrix(data[, sp.cols])
  colnames(sp.data) <- colnames(data[, sp.cols])
  n.subjects <- length(sp.cols)

  if (save.models)
    TSA.models <- vector("list", n.subjects)

  subj.count <- 0
  data.input <- data
  data <- cbind(data, coords.poly)

  # fit a trend surface for each species:
  for (s in 1:n.subjects) {
    subj.count <- subj.count + 1
    subj.name <- colnames(sp.data)[s]
    message("Computing TSA ", subj.count, " of ", n.subjects,
            " (", subj.name, ") ...")
    model.formula <- as.formula(paste(subj.name, "~",
                                      paste(colnames(coords.poly), collapse = "+")))
    model.expr <- expression(with(data, glm(model.formula,
                                            family = binomial, data = data)))
    if (step)
      model <- step(eval(model.expr), direction = direction, trace = 0)
    else model <- eval(model.expr)
    if (P || Favourability)
      pred <- predict(model, coords.poly, type = "response")
    else pred <- predict(model, coords.poly)
    if (Favourability)
      pred <- Fav(obs = sp.data[, s], pred = pred)
    data[, ncol(data) + 1] <- pred
    colnames(data)[ncol(data)] <- paste0(subj.name, suffix)
    if (save.models) {
      TSA.models[[subj.count]] <- model
      names(TSA.models)[[subj.count]] <- subj.name
    }
  }

  # gather the trend surface columns (and the id column, if given):
  predictions <- data.frame(data[, id.col],
                            data[, (ncol(data.input) + 1 + n.poly.terms):ncol(data)])
  if (!is.null(id.col)) {
    if (is.character(id.col))
      colnames(predictions)[1] <- id.col
    else colnames(predictions)[1] <- colnames(data)[id.col]
  }

  message("Finished!")
  timer(start.time)

  if (save.models)
    return(list(predictions = data.frame(predictions), TSA.models = TSA.models))
  else return(predictions)
}
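To illustrate what *multTSA* does under the hood, here is a minimal self-contained sketch (using simulated presence-absence data and hypothetical coordinates, not a call to the function itself, which also depends on the *Fav* and *timer* functions of *fuzzySim*): build the polynomial terms, fit a binomial GLM, apply backward stepwise AIC selection, and predict the presence probability from the spatial trend.

```r
set.seed(1)
n <- 200
x <- runif(n); y <- runif(n)  # hypothetical spatial coordinates
# simulated presence-absence data with a spatial trend (hypothetical species):
presence <- rbinom(n, 1, plogis(2 * x - 3 * y))

# 3rd-degree polynomial terms of the coordinates, as in multTSA:
tsa.poly <- as.data.frame(poly(cbind(x, y), degree = 3, raw = TRUE))
colnames(tsa.poly) <- paste0("poly", seq_len(ncol(tsa.poly)))
dat <- cbind(presence = presence, tsa.poly)

# binomial GLM on the polynomial terms, with backward stepwise AIC selection:
full.model <- glm(presence ~ ., family = binomial, data = dat)
tsa.model <- step(full.model, direction = "backward", trace = 0)

# presence probability predicted from the spatial coordinates alone:
trend <- predict(tsa.model, type = "response")
```

The `trend` values play the role of the default (*P = TRUE*) output of *multTSA*: presence probabilities between 0 and 1 derived purely from the spatial trend.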

**References**

Borcard D., Gillet F. & Legendre P. (2011) *Numerical Ecology with R*. Springer, New York.

Legendre P. & Legendre L. (1998) *Numerical Ecology.* Elsevier, Amsterdam.