The whole rhino situation is frustratingly simple. One of the most charismatic animals on the planet is heading towards extinction, and it is because of a single superstition of a bunch of wealthy guys in Vietnam. Even more frustratingly, these guys wouldn’t be worse off if they decided not to care about rhino horns any more – the demand is not driven by any life-or-death necessity, it is essentially arbitrary.

Compared to complex and gigantic screw-ups such as climate change or habitat loss, the rhino problem is way simpler and better defined, with the causes, actors, and mechanisms relatively well known. It seems so textbook-simple that we should be able to stop it much more easily than climate change.

Simple in theory, but the reality is daunting. Armed guards, preventive removal of horns, infusing horns with pink poison, education, legal measures, fortifications, ex situ conservation. All of these measures seem reasonable, yet they’ve had questionable impact: poaching rates continue to increase, steadily and surely, making rhinos rarer, driving prices up, increasing demand, increasing the pressure.

Perhaps my grandchildren will only know rhinos from pictures.

And if we can’t prevent something so simple as the rhino extinction, then how can we deal with something so complex as climate change?

Recently I brought up these issues at lunch, suggesting that there must be something that we can do – after all, we are articulate scientists from a rich country, we work at an institution whose mission is to understand biodiversity loss, and we are ecologists and zoologists. There must be something that we can do better than non-academics to help the rhino cause.

We promptly rejected the idea of writing a fake paper linking the use of rhino products to cancer and publishing it in some predatory open-access journal under fake names. Apart from being potentially discrediting for serious conservation science, it could be a perverse incentive for using rhino products as a weapon, again raising the price and demand.

In the end we did not come up with anything, so it was kind of depressing (as many of our lunch conversations about environmental issues are).

Then, later, a colleague tried to cheer me up, suggesting that if it really bothers me, then I should try to get in touch with some NGO that deals with the rhino problem – surely there must be some. So I went online and I found www.savetherhino.org, and I realized that perhaps the most rational action I can take to help the rhino is to support an initiative like this. And since one of the founding members was Douglas Adams, they’ve got my heart.

For a start, I am sending them some money.

- Objective
- The data
- Fixed-effects ANOVA in JAGS
- Relaxing the assumption of constant variance
- Conclusion

This post is inspired by a question that Dylan Craven raised during my Bayesian stats course.

My aim here is to demonstrate that, in a Bayesian setting, one can make powerful inference about within-group ANOVA means even for groups with sample size n = 1, i.e. groups with only a single measurement. The main reason this is possible is the assumption of constant variance (σ) among groups: groups with sufficiently large n increase the credibility of the σ estimate, which in turn narrows the credible interval of the mean μ in the group with n = 1. Conversely, when the assumption is relaxed and σ is allowed to vary among groups, groups with a sample size of 1 are no longer that useful.

I will use modified artificially-generated data from the example from Marc Kery’s Introduction to WinBUGS for Ecologists, page 119 (Chapter 9 - ANOVA). The (artificial) data are supposed to describe snout-vent lengths in five populations of Smooth snake (*Coronella austriaca*); I modified the data so that **the first population is only represented by a single observation**, as indicated by the arrow in the figure below.

Loading the data from the web:

`snakes <- read.csv("http://www.petrkeil.com/wp-content/uploads/2017/02/snakes_lengths.csv")`
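In case the URL above is ever unavailable, data with the same structure can be simulated. This is a hypothetical stand-in (the column names match the code below, but the means and sd here are made up, not Kéry's values):

```r
# Simulate snake data: columns 'snout.vent' and 'population',
# with population 1 represented by a single snake (n = 1)
set.seed(123)
population <- c(1, rep(2:5, each = 10))   # population 1 has n = 1, others n = 10
group.means <- c(48, 45, 50, 55, 52)      # made-up population means [cm]
snout.vent <- rnorm(length(population), mean = group.means[population], sd = 3)
snakes <- data.frame(snout.vent = snout.vent, population = population)
```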

Plotting the data:

```
par(mfrow=c(1,2), mai=c(0.8,0.8, 0.1, 0.1))
plot(snout.vent ~ population, data=snakes, ylab="Length [cm]")
arrows(x1=1.2, y1=58, x0=2, y0=61, col="red", lwd=2, angle=25, length=0.2)
boxplot(snout.vent ~ population, data=snakes, ylab="Length [cm]", xlab="population", col="grey")
```

First, I will model the data using a traditional fixed-effects ANOVA model. For a given snake i in population j, **the model** can be written as:

y_i ~ Normal(μ_{x_i}, σ²)

where μ_j is the mean of the j-th population (j = 1, …, 5), x_i is the population of snake i, and i = 1, …, N indexes the individual measurements.

I am interested in a simple question: **Is there a statistically significant difference between population 1 and 2?** Will I be able to infer the difference even when population 1 has a sample size of 1?

I will fit the model using MCMC in JAGS. Hence, I will prepare the data in a list format:

```
snake.data <- list(y=snakes$snout.vent,
x=snakes$population,
N=nrow(snakes),
N.pop=5)
```

Loading the library that enables R to communicate with JAGS:

`library(R2jags)`

JAGS Model definition:

```
cat("
model
{
# priors
sigma ~ dunif(0,100) # (you may want to use a more proper gamma prior)
tau <- 1/(sigma*sigma)
for(j in 1:N.pop)
{
mu[j] ~ dnorm(0, 0.0001)
}
# likelihood
for(i in 1:N)
{
y[i] ~ dnorm(mu[x[i]], tau)
}
# the difference between populations 1 and 2:
delta12 <- mu[1] - mu[2]
}
", file="fixed_anova.txt")
```

And here I fit the model:

```
model.fit.1 <- jags(data = snake.data,
model.file = "fixed_anova.txt",
parameters.to.save = c("mu", "delta12"),
n.chains = 3,
n.iter = 20000,
n.burnin = 10000,
DIC = FALSE)
```

```
## Compiling model graph
## Resolving undeclared variables
## Allocating nodes
## Graph information:
## Observed stochastic nodes: 41
## Unobserved stochastic nodes: 6
## Total graph size: 107
##
## Initializing model
```

Plotting parameter estimates with `mcmcplots`:

```
library(mcmcplots)
plot(snakes$snout.vent ~ snakes$population, ylim=c(35, 65), col="grey",
xlab="Population", ylab="Body length")
caterplot(model.fit.1, parms="mu", horizontal=FALSE,
reorder=FALSE, add=TRUE, labels=FALSE, cex=2, col="red")
```

The red dots and bars show the posterior densities of μ_j. Grey dots are the data.

**So what is the difference between population 1 and 2?**

```
denplot(model.fit.1, parms="delta12", style="plain", mar=c(5,5,2,1), main="",
xlab="Difference between means of populations 1 and 2",
ylab="Posterior density")
```

There is a clear non-zero difference (the posterior density does not overlap zero). So we can conclude that there is a statistically significant difference between populations 1 and 2, and we can conclude this despite having only n = 1 for population 1.
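The same conclusion can also be read off numerically. A minimal sketch, assuming the `model.fit.1` object from above (R2jags stores the MCMC samples in `BUGSoutput$sims.list`):

```r
# Summarize the posterior of delta12 = mu[1] - mu[2] from the MCMC samples
d12 <- model.fit.1$BUGSoutput$sims.list$delta12
quantile(d12, c(0.025, 0.975))   # 95% credible interval of the difference
mean(d12 > 0)                    # posterior probability that population 1 is larger
```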

We can have a look at what happens when the assumption of constant σ is relaxed. In other words, every population j now has its own σ_j, and the model is:

y_i ~ Normal(μ_{x_i}, σ_{x_i}²)

Here is the JAGS definition of such a model:

```
cat("
model
{
# priors
for(j in 1:N.pop)
{
mu[j] ~ dnorm(0, 0.0001)T(0,) # Note that I truncate the priors here
sigma[j] ~ dunif(0,100) # again, you may want to use proper gamma prior here
tau[j] <- 1/(sigma[j]*sigma[j])
}
# likelihood
for(i in 1:N)
{
y[i] ~ dnorm(mu[x[i]], tau[x[i]])
}
# the difference between populations 1 and 2:
delta12 <- mu[1] - mu[2]
}
", file="fixed_anova_relaxed.txt")
```

Let’s fit the model:

```
model.fit.2 <- jags(data = snake.data,
model.file = "fixed_anova_relaxed.txt",
parameters.to.save = c("mu", "delta12"),
n.chains = 3,
n.iter = 20000,
n.burnin = 10000,
DIC = FALSE)
```

```
## Compiling model graph
## Resolving undeclared variables
## Allocating nodes
## Graph information:
## Observed stochastic nodes: 41
## Unobserved stochastic nodes: 10
## Total graph size: 140
##
## Initializing model
```

Here are the parameter estimates plotted with `mcmcplots`:

```
library(mcmcplots)
plot(snakes$snout.vent ~ snakes$population, ylim=c(0, 100), col="grey",
xlab="Population", ylab="Body length")
caterplot(model.fit.2, parms="mu", horizontal=FALSE, reorder=FALSE, add=TRUE,
labels=FALSE, cex=2, col="red")
```

Clearly, the posterior density of `mu[1]` is essentially the prior. In other words, the single observation has little power on its own to override the prior.

We can plot the posterior density of the difference:

```
denplot(model.fit.2, parms="delta12", style="plain", mar=c(5,5,2,1), main="",
xlab="Difference between means of populations 1 and 2",
ylab="Posterior density")
```

And we see that it is all over the place: huge uncertainty, and no clear inference is possible.

I have shown that one can make powerful inference about population means even if the sample size in a given population is very small (e.g. 1). This is possible when all of the following conditions are met:

- ~~Bayesian approach to ANOVA must be used.~~
- We must have sufficient data from other populations.
- We must be confident that the assumption of constant variance among the groups is justified.

The course is full. Here is the syllabus with instructions. For complete raw code (Markdown and R) and other materials, see the course's GitHub repository.

Introduction: Course contents, pros and cons of Bayes, necessary skills.

Normal distribution: Introducing likelihood on the Normal example.

Poisson distribution: Likelihood maximization. Probability mass function. AIC and deviance.

The Bayesian way and the principle of MCMC sampling.

Simplest model in JAGS: Estimating lambda parameter of a simple Poisson model.

Bayesian resources: Overview of software, books and on-line resources.

t-test: First model with 'effects', hypotheses testing, derived variables.

Linear regression part 1: Traditional GLM, MLE, and manual estimation.

Linear regression part 2: Bayesian version in JAGS, credible and prediction intervals.

ANOVA part 1: Model definition.

ANOVA part 2: Fixed vs. random-effect. Effect of small sample, shrinkage.

Site occupancy model: logistic regression, imperfect observation, latent variables.

Useful probability distributions (binomial, beta, gamma, multivariate normal, negative binomial).

Publishing Bayesian papers.

Concluding remarks.

- Kéry (2010) *Introduction to WinBUGS for Ecologists*. Academic Press.
- Bolker (2008) *Ecological Models and Data in R*. Princeton Univ. Press.
- McCarthy (2007) *Bayesian Methods for Ecology*. Cambridge Univ. Press.
- Gelman & Hill (2006) *Data Analysis Using Regression and Multilevel/Hierarchical Models*. Cambridge Univ. Press.

- Lunn et al. (2013) *The BUGS Book: A Practical Introduction to Bayesian Analysis*. CRC Press.
- Gelman et al. (2004) *Bayesian Data Analysis*. Chapman & Hall.
- Kruschke (2014) *Doing Bayesian Data Analysis*. Academic Press.

- Clark (2007) *Models for Ecological Data*. Princeton Univ. Press.
- Kéry & Royle (2015) *Applied Hierarchical Modeling in Ecology*. Academic Press.
- Royle & Dorazio (2009) *Hierarchical Modeling and Inference in Ecology*. Academic Press.

My take:

**Science rarely works with facts.** Surprisingly, the term fact almost never appears in peer-reviewed scientific publications. I don't hear it at seminars or conferences either. We scientists tend to avoid facts; it is not how we think about the world. Instead, we test hypotheses, compare models, build theories; we observe and we explore data; we make assumptions, assign probabilities; we falsify, reject, criticize. And we doubt. We seem reluctant to establish facts. Even in the realm of purely deductive logic (1 + 1 = 2) we rarely use the term fact – we simply describe the logical flow.

**Facts are definitive, science is not.** Just recall Copernicus and Galilei – what was once a fact accepted by the entire academic community turned out to be a colossal mistake. Such stories are not uncommon; there have always been scientific revolutions, paradigm shifts, or just ordinary theory improvements. Science changes. On top of that, the world can be annoyingly stochastic and we have limited ability to observe it and experiment with it – hence, uncertainty creeps in. That is why we have probability theory, and why probability is so useful. Note: even the language of law is careful not to be too definitive, and it implicitly works with probabilities: when strong and solid evidence is presented, it proves something *beyond reasonable doubt*, not definitively.

**Observations can be facts, but what about the rest of science?** We can consider individual observations and measurements (the data) to be facts (see also this post). Although there is the problem of measurement error and observational bias, this is a technical issue that can be addressed. In general, credible observations and measurements do exist, and it is reasonable to label some of them as facts. Yet observations and measurements are only the beginning of science; on their own they are of little use. To make observations useful we must use them to test hypotheses, compare models, assign probabilities, build theories, and make predictions. But speaking about facts in this hypothetical part of science is tricky.

**Facts are either trivial, or too radical.** Let me elaborate on hypotheses and statements. Simple examples: "*Evolution happens*", "*Climate changes*". Since it is hard to imagine climate or living things remaining strictly constant through time, I'd put the probability of these statements being true, given all I've seen so far, way above 0.99999999999. In other words, I am happy to label a temporal change of almost anything as a fact, albeit a trivial one. Things get trickier when statements get specific. Example: "*Our planet has warmed up during the last 100 years.*" Is this a fact? The first complication is the multitude of meanings that hide behind such a simple statement. How exactly do we measure global temperature? Where? And how do we aggregate it from local measurements to the global scale? Different approaches will give different quantitative results. Second, measurements are not always precise. Yet even if I consider the multitude of meanings and the measurement errors, and given all I've read so far, I'd assign the statement probability above 0.999: this is my subjective confidence about whether the planet has warmed up. It is quite high, so perhaps I might be tempted to speak about a fact. On the other hand, it is less of a fact than the previous statements. Also, this fact is politically irrelevant.

How about a more relevant one: "*Humans have caused global warming by excessive exploitation of fossil fuels, and subsequent increased concentration of greenhouse gases in the atmosphere.*" There are surely many assumptions, definitions, interpretations and imprecisions that will influence my confidence in this statement. Given what I know (I am an ecologist, not a climate expert), I would give the statement a probability of 0.95. Now is this a fact? Would 0.85 also be a fact? Personally, I'd say that labelling this statement as a fact is unnecessarily radical.

Finally, here is something from the core of the current political debate: "*If all countries dramatically reduce greenhouse gas emissions right now, climate change will slow down.*" And here I have doubts, since this requires forecasting, and forecasts tend to be messy. I would lean towards thinking that the statement is somewhat reasonable, so I give it a probability of 0.7, yet I would definitely not label it as a fact. If I were a politician, I would probably be willing to bet my career on it, but partly because I think that career choices are always a bit of a gamble, not because of a strong conviction (plus, I think that switching to clean energy could be fun since it would piss off Putin).

To summarize, science does not exactly need facts, there are always better words to be used. And in some cases science should actually avoid facts as something unhelpful or exaggerated. The language of probability is what we should use instead.

However, one argument for working with facts is that the general public doesn't care about how academics think – facts are a useful simplification for the masses, executives, and politicians. You scientists can do your smarty-pants gibberish, but you must also give us some hard facts that we can work with.

Thoughts on that: First, asking scientists to provide facts is just shifting the responsibility for decisions from executives and politicians to scientists. Second, I doubt that labelling arguments as scientific facts makes them more persuasive – a discussion where one side claims to own the facts is, unfortunately, prone to end up as a fight. It is better to persuade with logos, ethos, and pathos, rather than with labels. Third, demagogues, ideologues and populists have their alternative facts; a considerable part of the global population is willing to kill for all kinds of random bullshit – that is how certain they are about their facts. Feeding them stuff labelled as scientific facts is like pouring gasoline on a fire.

I suggest a solution, although a time-consuming one: we need to serve the scientific method, critical thinking, and probability theory to the masses in smaller doses, from an early age. If pupils are able to get algebra and reading, they are able to get probability. If high school students are able to read Shakespeare, they are able to understand what a peer-reviewed article is. The scientific method must be put on the same level as scientific findings, languages, history, math or literature. Scientists must do more to popularize not only their results (or facts), but also the way they work and think. It is the only way the general public, and subsequently the politicians and decision makers, will ever take us seriously.

PS (27/2/2017): In response to this post, Andy Gonzales pointed me to this article in The New Yorker.

PPS (27/2/2017): Uncertainty does not end with probability. There can actually be uncertainty about the probability itself – we can put credible intervals around a probability: I can say that the probability of something is 0.8, plus or minus 0.1.
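One common way to express such second-order uncertainty is to treat the probability itself as a random variable, e.g. with a Beta distribution. A hedged sketch in R (the 0.8 and 0.1 are just the illustrative values from the text, matched by the method of moments):

```r
# Moment-match a Beta distribution to mean m = 0.8 and sd s = 0.1
m <- 0.8
s <- 0.1
v <- m * (1 - m) / s^2 - 1   # "effective sample size" implied by m and s
a <- m * v                   # shape1 (here 12)
b <- (1 - m) * v             # shape2 (here 3)
qbeta(c(0.025, 0.975), a, b) # a 95% interval for the probability itself
```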


**Experimental macroecology needs better justification**

One entire morning was dedicated to experimental macroecology. Presented were results from small-grain manipulative experiments, sometimes replicated over large extents, sometimes not. However, it all felt like a session at a regular ecological (or botanical) meeting: I missed connections to large-scale biogeography and to the currently unresolved issues in macroecology.

One of the justifications for experimental macroecology was that it directly tests mechanisms, which can improve macroecological predictions. However, I haven’t seen a plausible demonstration that this really works, and I remain skeptical – I still believe that macroecological patterns emerge as a result of a whole array of idiosyncrasies, and I find it more useful to study large-scale (and especially large-grain) behavior of ecological systems using state variables and statistical models, rather than using reductionism and experiments.

I am not saying that experimental macroecology is bullshit. I just think that it needs to more directly demonstrate (theoretically and empirically) that it can actually deliver.

**Dynamic macroecology takes off, but it needs better terminology**

Another major block of talks involved dynamic macroecology. John Harte announced a new dynamic version of maximum entropy theory of ecology, there were talks on both present and deep time extinctions, on invasions, and on local-scale community dynamics. There was a lot of emphasis on processes and mechanisms.

In his earlier talk Brian McGill tried to clarify the role of *grain* and *extent* in experimental macroecology. This distinction is absolutely crucial for the progress of the field, and we need to get this right in order to understand each other. I suggest that a similar service should be done for the terms *process* and *mechanism*, which are key for dynamic macroecology. We should pin down the exact difference between processes and mechanisms; plus, when thinking along the temporal dimension, we should learn how processes and mechanisms link to concepts such as *pattern*, *event*, *causality*, *correlation*, and *drivers*.

There is always excitement when temporal macroecological data are presented – such data are rare, and there is the promise that they will get us closer to processes and mechanisms, which ecologists consider more important than patterns or correlations. But are they really?

To me, a *process* is, quite simply, a temporal change of any kind. A process can be deterministic or stochastic, and it can involve causal chains but __it does not have to__. Moreover, having a process captured at a single location does not necessarily enable us to map it in space (similarly, a spatial pattern may indicate little about the temporal process behind it). Hence, I don’t see processes as more important or fundamental than, for example, static spatial gradients. They seem similarly important. I also don’t see much added value in process-based models over statistical models.

This contrasts with *mechanisms* (“machines”), which somehow invoke the notion of causality – in a machine you push a piston, it turns the crankshaft, which turns the wheel; action, reaction. In ecology, a population hits carrying capacity, this reduces individual fecundity and increases mortality, and population growth slows down. Hence, mechanisms seem to have an added value over processes: mechanisms always involve causes and forces.

Yet even mechanisms are potentially treacherous: one person’s mechanism is another person’s correlation, or pattern. For example, macroecologists may perceive a tendency of a species to maintain viable populations only within a given temperature range as a mechanism driving the species' distribution, but for physiologists it’s a correlation (between temperature and population viability).

All in all, it’s a bit of a mess, and a clarification is due.

**Acknowledgements:** Some of the ideas presented here are based on discussions with David Currie, and with members of Center for Theoretical Study in Prague.

In the attic of my grandmother's house there is a box with my old personal stuff. Inside there is a smaller box with a bunch of 3.5″ floppy disks. One of them has a fading handwritten label "*Bachelor thesis - data*".

To get to the data I could use the 3.5″ drive in a 90s minitower resting in my wife's parents' cellar. The only problem is that it won't boot. Even if I fix that, will the 3.5″ disk still be readable? Further, if my children, 20 years into the future, wanted to have a look at the data, would they even know what a minitower is? For them the 3.5″ floppy disk will perhaps be as impenetrable as Egyptian hieroglyphs were before the discovery of the Rosetta Stone.

I guess that almost all science that has ever been stored on floppy disks, magnetic tapes, or punched cards, is now practically inaccessible.

In this context, Jeremy Fox asks:

… in what concrete ways do you feel worse off because you can’t access the data that past ecologists stored in a now-unreadable format on 5.25″ inch floppies? Do you ever have occasion to curse their lack of foresight?

Well, I don’t feel worse off, and I don’t curse past ecologists for using floppies. I am OK today because most of the 20th century’s science was actually printed, and hard copies have one advantage over binary data: they are readable without any special device – humans can access them directly. I can still get Taylor & Woiwod’s 1982 estimates of the mean–variance scaling of insect populations from their grand table printed over several pages of the Journal of Animal Ecology.

Today we consider electronic data to be somehow more secure and accessible than in the floppy nineties. We are so confident about the solidity of the cloud that we have been moving the entire scientific infrastructure to it. Countless papers are published, heaps of data are deposited online, new journals emerge, and it all exists solely as electromagnetic bits that degrade over time, or on flash drives that degrade over writing cycles. And in order to read the stuff, we always need a rather sophisticated device: a personal computer with up-to-date software.

But keeping software up to date can be a problem when hardware ages. Operating systems change, and there is no guarantee that our current hardware will be operable in 50 years. Example: proprietary operating systems (such as MS Windows) tend to give up on old hardware support and do not enable downgrading to older versions. Further, file formats evolve. There was no .xlsx when I was a kid; now everybody uses it and nobody cares about .dbf. Although this does not seem like an issue now (we can still open .dbf with Excel and other programs), I can imagine that Excel2050 will drop .dbf support because it will be of little commercial value, or just to make things simpler.
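As a small illustration of the kind of future-proofing I mean, a legacy binary table can be rescued into plain text in R. A sketch, assuming a hypothetical file `old_data.dbf` (`read.dbf()` is in the `foreign` package, which ships with R as a recommended package):

```r
# Convert a legacy .dbf table to plain-text .csv, readable by anything
library(foreign)
old <- read.dbf("old_data.dbf")                     # hypothetical legacy file
write.csv(old, "old_data.csv", row.names = FALSE)   # non-proprietary text format
```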

In the past, libraries received hard copies of the journals that they subscribed to, creating a globally distributed hard-copy backup of scientific knowledge, with no restriction on future use. The stuff was accessible even when the subscription ended. In contrast, the current trend is towards online access only; libraries pay for temporary access to remote repositories on publishers’ servers. When an institution can no longer afford access to Elsevier’s journals, everything all of a sudden becomes inaccessible.

It bothers me that nobody seems to care about what happens, in the long run, to all the electronic science produced today. Will my grandchildren be able to open the data that have been uploaded to the clouds? Will the clouds still be there? Will people have access to Wiley's online library or to JSTOR? Will these companies still exist? Will future generations be able to read .pdf or .xlsx formats? Will they, in a hypothetical post-nuclear (post-global warming, post-biodiversity loss) future have computers at all? And how about grandchildren of our grandchildren?

Empires fall, climate changes, catastrophes happen. That is the historical experience. Americans are just about to vote on whether to give access to their nuclear arsenal to a maniac. And the saying goes: just because you’re paranoid doesn’t mean they aren’t after you. Sometimes I even imagine a planet-of-the-apes future where a laptop is found, covered with moss, mysterious, with whole libraries stored (but probably corrupted and fragmented) on a tiny little object inside, invisible and inaccessible.

Maybe I am exaggerating this. Jeremy Fox writes:

The “needs” and “wants” of long-distant future people aren’t well-defined today. What they “need”, and “want”, and what they do to meet those needs and wants, will depend on what we do today – but in complex and unpredictable ways. At the timescales you’re talking about, there’s nothing but Rosetta Stones. Nothing but sheer luck.

Perhaps. We can’t foresee the distant future. But I still believe that we can make some relatively small and easy precautionary steps, which can (with a bit of luck) help our grandchildren to access our science.

I suggest:

- To scientists: Use non-proprietary and simple text-based data formats (e.g. .csv) to deposit data. Use simple text-based file formats for writing (e.g. Markdown, LaTeX, html). Use free and open-source operating systems (e.g. GNU/Linux) and software (e.g. R).
- To journals, publishers, editors: Think twice before going 100% online. If that can’t be prevented, maybe print everything once a year in small print runs, and send the hard copies to selected libraries anyway.
- To funding agencies and governments: You often require that research funded with your money is open-access. How about adding another requirement: The research results should also be properly archived in some durable form. Somewhere.
- To libraries: I see two roles of a library – it enables access to literature, and it archives. The latter role can be fostered, and libraries can play an active role in archiving electronic literature that they subscribe to.
- To engineers: Please invent a cheap technology for storage of large amounts of data that does not degrade over time.
- To historians (and to Peter Turchin): Some ancient literature has still made it to the present. Please investigate why some of it survived, and why some of it hasn’t. Was it nothing but sheer luck? There can be some fundamental patterns and lessons back there.
- To artists: Please carve the Unified Neutral Theory of Biodiversity, Metabolic Theory of Ecology and Maximum Entropy Theory of Ecology to granite. Mount Rushmore could be a good location.
- To museums: Collect present-day computers.
- How about this, but for scientific data and literature?

*Motivation for this post came from an exchange under a post by Brian McGill on Dynamic Ecology.*

License: This is a public domain work. Feel free to do absolutely whatever you want with the code or the images, there are no restrictions on the use.

**Figure 1A:**

**Figure 1B:**

**Figure 2A:**

**Figure 2B:**

I haven’t found a simple ‘one-liner’ that’d do such plots in R. In fact, I have always found R’s treatment of logarithmic axes a bit dull - I want the fancy gridlines!

`loglog.plot`

To provide the log-linear gridlines and tickmarks, I have written the function `loglog.plot`.

**To load the function** from my GitHub repository:

`source("https://raw.githubusercontent.com/petrkeil/Blog/master/2016_07_05_Log_scales/loglogplot.r")`

**Arguments:**

- `xlim, ylim` – Numeric vectors of length 2, giving the x and y coordinate ranges on the linear scale.
- `xlog, ylog` – Logical values indicating whether the x and y axes should be logarithmic (TRUE) or linear (FALSE). When the linear scale is chosen, no gridlines are drawn.
- `xbase, ybase` – Base of the logarithm of the respective axis. Ignored if a linear axis is specified.
- `...` – Further arguments to the generic R function `plot`.

**Value:**

An empty R base graphics plot, ready to be populated using `lines`, `points`, and the like.

Here I plot three power functions: one sub-linear (exponent = 0.8), one linear (exponent = 1) and one supra-linear (exponent = 1.2).

```
par(mfrow=c(1,2))
x <- seq(1, 1000, by=10)
# left panel - both axes linear
plot(x, x, ylim=c(0,4000))
points(x, x^0.8, col="blue")
points(x, x^1.2, col="red")
# right panel - loglog plot
loglog.plot(xlab="x", ylab="y", ylim=c(1, 10000))
points(log10(x), log10(x))
points(log10(x), 0.8*log10(x), col="blue")
points(log10(x), 1.2*log10(x), col="red")
```

In this example I plot a lognormal probability density function, and I only plot the tickmarks and gridlines along the x-axis. The y-axis is linear.

```
par(mfrow=c(1,2))
x <- 1:1000
# left panel - linear
plot(x, dlnorm(x, meanlog=4), ylim=c(0, 0.012),
col="red", ylab="probability density")
# right panel - loglog plot
loglog.plot(ylog=FALSE, ylim=c(0,0.012), xlim=c(0.1, 1000),
xlab="x", ylab="probability density")
points(log10(x), dlnorm(x, meanlog=4), col="red")
```


**The course is full.**

We are organizing a 3 day intensive course on open-source GIS high-performance analytical methods, with **Giuseppe Amatulli** (Yale University) as the main teacher, and **Petr Keil** (iDiv) as a teaching assistant. Date and place: 29 June - 1 July 2016, 'Red Queen' room, iDiv, Leipzig, Germany.

Over the past decade there has been an explosion in the availability of geographic data for environmental research, for both static and temporal analyses. Examples are remotely sensed data or large biodiversity databases. We are now able to tackle key ecological and environmental questions with unprecedented rigor and generality. Leveraging these data streams requires new tools and increasingly complex workflows. The course introduces a set of free and open-source software (BASH, AWK, GDAL, GRASS, R, Python, PKTOOLS, OFGT) for performing spatio-temporal analyses of big environmental data in a Linux environment. We also introduce multi-core, cloud and cluster computation procedures.

See website of a sister course for more information.

Maximum number of participants: 15.

The course will take place in 'Red Queen' room at iDiv (Deutscher platz 5e, Leipzig, Deutschland).

The price is **350 EUR** per participant, and it covers: 20 hours of class training, student supervision, course auditing, course material, lectures, sample data and a USB flash drive inclusive of Linux Virtual Machine with all the software installed.

The price does not cover food and drinks – the reason is that we prefer to offer a cheaper course with less catering, rather than a fully catered but more expensive course (we'd have to raise the price to ~400 EUR to include catering).
