The help for the jagam() function states that you can use "s", "te", "ti" or "t2" smooths, so that is probably why it does not work with "ns" splines.

One solution would be to use Stan instead of JAGS for this. There is a package, "brms", which allows you to fit GAMs with all kinds of splines in a Bayesian setting. It's great!
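Not from the original message, but a minimal sketch of the idea (made-up data frame `dat`; the brm() call itself is left unevaluated here because it needs a Stan compilation step). Since ns() from the splines package just builds an ordinary basis matrix, it works as a plain formula term in brms exactly as it does in glm():

```r
library(splines)  # ships with base R; provides ns()
set.seed(1)
dat <- data.frame(x1 = rnorm(100), x2 = runif(100, 0, 10))
dat$y <- rpois(100, exp(0.2 * dat$x1 + sin(dat$x2)))

# ns() is an ordinary basis-matrix term, usable in any model formula:
fm <- glm(y ~ x1 + ns(x2, df = 7), family = poisson, data = dat)

# The same formula carries over to brms unchanged (not run: needs Stan):
# library(brms)
# fit <- brm(y ~ x1 + ns(x2, df = 7), family = poisson(), data = dat)
# For penalized, mgcv-style smooths, brms also offers s() and t2().
```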

Cheers, Petr

Like:

jagam(y ~ x1 + ns(x2, df=7), family=poisson)
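A version that jagam() should accept, swapping the ns() term for a supported s() smooth (a sketch with made-up data; note that jagam() additionally requires a file to write the generated JAGS model code into):

```r
library(mgcv)  # provides jagam()
set.seed(1)
dat <- data.frame(x1 = rnorm(50), x2 = runif(50, 0, 10))
dat$y <- rpois(50, exp(0.2 * dat$x1 + sin(dat$x2)))

# s() with a thin-plate basis instead of ns(); k sets the basis size
jg <- jagam(y ~ x1 + s(x2, bs = "tp", k = 8), family = poisson,
            data = dat, file = tempfile(fileext = ".jags"))
str(jg$jags.data)  # the data list to hand to JAGS with the model file
```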

Does anyone know of any benchmark comparisons between the two?

I agree with you and usually try to keep my scientific work as .csv and .rmd files on CDs and DVDs. The most important things are printed, and of course the good old field journals are kept safe. Sometimes they are very helpful in current work.

I am struggling with autocorrelation tests: does anyone have a suggestion for generating correlograms with specific lag distances (in meters) computed from field coordinates? I have points on a grid and would like to plot Moran's I to test for autocorrelation in a z variable measured at each point, at lags of 50, 150, and 200 meters around all points. I don't know how to do this. Thanks.
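Not an answer from the thread, but one base-R way to sketch it (made-up grid and z values; packages such as ncf or pgirmess wrap this more conveniently in a correlog() function). Moran's I is computed per distance band using binary neighbour weights:

```r
# Moran's I for one distance band (lo, hi] in meters, base R only
moran_band <- function(z, d, lo, hi) {
  W <- (d > lo & d <= hi) * 1        # binary weights for pairs in the band
  diag(W) <- 0                       # a point is not its own neighbour
  zc <- z - mean(z)
  n  <- length(z)
  (n / sum(W)) * sum(W * outer(zc, zc)) / sum(zc^2)
}

set.seed(1)
grid <- expand.grid(x = seq(0, 200, by = 25), y = seq(0, 200, by = 25))
z    <- rnorm(nrow(grid))            # the variable measured at each point
d    <- as.matrix(dist(grid))        # pairwise distances in meters

bands <- rbind(c(0, 50), c(50, 150), c(150, 200))
I <- apply(bands, 1, function(b) moran_band(z, d, b[1], b[2]))
plot(rowMeans(bands), I, type = "b",
     xlab = "lag distance (m)", ylab = "Moran's I")
```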

It took me a while to come back ...

Re: "even spread of x-values": It may well be that I am confounding things, but I stick to my claims. As you write yourself, the scaling of variables (x and y) is a convention, not god-given. So transforming X is just rescaling to different units (e.g. from proton concentration to pH). The same goes for any other transformation (think Fahrenheit and Celsius, although those are linear transformations and will not change the shape of the distribution). So, if my X are (for whatever reason) not uniformly distributed, I can rescale them to my convenience (e.g. by a square root).
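The Fahrenheit/Celsius point can be checked directly: a linear rescaling of X leaves the fitted values unchanged, while a nonlinear transformation (sqrt, log) gives a genuinely different model (made-up data):

```r
set.seed(1)
xF <- runif(50, 32, 212)            # "Fahrenheit"
y  <- 2 + 0.1 * xF + rnorm(50)
xC <- (xF - 32) / 1.8               # the same data in "Celsius"

m1 <- lm(y ~ xF)
m2 <- lm(y ~ xC)
all.equal(fitted(m1), fitted(m2))   # TRUE: linear rescaling, same fit

m3 <- lm(y ~ sqrt(xF))              # nonlinear: a different model
isTRUE(all.equal(fitted(m1), fitted(m3)))  # FALSE
```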

And, as you know as well as I, sampling cannot guarantee uniformity (as my example with island areas was supposed to indicate). Would you rather ignore all islands in an archipelago for which you have y-values just to make sure that the areas are uniform? Of course you (and I) would not.

You are of course completely right that such transformations change the relationship (from additive to multiplicative).

Re: "clumping does increase weights of points outside": It sure does. An example may say more than 1000 words:

x <- c(1:10, 42, 35)
y <- c(9, 9, 10, 7, 7, 7, 7, 6, 7, 5, 33, 40)
fm2 <- glm(y ~ log(x), family = poisson)
plot(x, y, las = 1, pch = 16, cex.lab = 1.5, log = "x")
influence.measures(fm2)
# or separately:
cooks.distance(fm2)
hatvalues(fm2)

Clearly the far-away points have a higher weight (e.g. hat value). That's all I meant. In a linear regression the line goes through the centre (\bar x, \bar y), and the further the distance of a data point from this point, the larger its leverage (sic!). Clumping moves the centre towards the cluster, giving more weight to out-of-cluster points.
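For simple linear regression the leverage is explicit: h_i = 1/n + (x_i - \bar x)^2 / sum_j (x_j - \bar x)^2, so the further a point sits from the centre the larger its hat value, and clumping drags \bar x toward the cluster. A quick check with the clumped x from above (using a plain lm here, since in the Poisson GLM the hat values additionally involve the working weights):

```r
x <- c(1:10, 42, 35)
y <- c(9, 9, 10, 7, 7, 7, 7, 6, 7, 5, 33, 40)

fm <- lm(y ~ x)
h_manual <- 1/length(x) + (x - mean(x))^2 / sum((x - mean(x))^2)
all.equal(unname(hatvalues(fm)), h_manual)  # TRUE

# the two out-of-cluster points carry by far the largest leverage:
order(hatvalues(fm), decreasing = TRUE)[1:2]  # points 11 and 12
```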

Re: "regression trees are not parametric": People seem to use the word "non-parametric" in (at least) two senses: 1. when the RESPONSE doesn't follow a specific distribution, and 2. when the model doesn't return "obvious" parameters. Trees do have parameters, so by my definition they could well be seen as parametric. Also, depending on which criterion you (not you really, but the algorithm) use to decide where to put the split, you can very well take a likelihood-based approach. If you think of CARTs with variance as the splitting criterion, you imply a normal distribution.
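To make the variance-criterion point concrete: choosing a single split to minimize the pooled sum of squares is exactly maximizing a Gaussian likelihood with a constant mean on each side of the threshold. A toy sketch (made-up data, not any particular CART implementation):

```r
set.seed(1)
x <- runif(100)
y <- ifelse(x < 0.6, 0, 2) + rnorm(100, sd = 0.5)

sse <- function(v) sum((v - mean(v))^2)

# scan candidate thresholds; the variance (SSE) criterion picks the split
cand  <- sort(x)[2:99]               # keep both sides non-empty
crit  <- sapply(cand, function(s) sse(y[x < s]) + sse(y[x >= s]))
split <- cand[which.min(crit)]
split                                # lands near the true threshold 0.6

# Minimizing SSE is maximizing the normal log-likelihood with
# group-wise means (up to constants) -- hence the implied normality.
```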

Also, I think even you would call a threshold model a "model", and a CART is just a recursive threshold model. I share your unease with press-the-button-hey!-machine-learning approaches, but I would not go so far as to exclude them from the club of "models".

Thanks for writing your great blogs!

Carsten
