Evolving notes, images and sounds by Luis Apiolaza

Category: bayesian (Page 1 of 3)

Start with the programming language and statistical approach used by your community

I have been very busy with the start of the semester, teaching regression modelling. The craziest thing was that the R installation was broken in the three computer labs I was allocated to use. It would not have been surprising if I were talking about Python (🤣), but the installation script had a major bug. Argh!

Anyhow, I was talking with a student who was asking me why we were using R in the course (she already knew how to use Python). If you work in research for a while, particularly in statistics/data analysis, you are bound to bump into long-lived discussions. It isn’t the Text Editor Wars or the Operating System Wars. I am referring to two questions that come up all the time in long threads:

  1. What language should I learn or use for my analyses?
  2. Should I be a Bayesian or a Frequentist? You are supposed to choose a statistical church.

The easy answer to the first one is “because I say so”: it’s my course. A longer answer is that a Domain Specific Language makes life a lot easier, as it is optimised for the tasks performed in that domain. An even longer answer points to something deeper: a single language is never enough. My head plays images of Minitab, SAS, Genstat, S-Plus, R, ASReml, etc. that I had to use at some point just to deal with statistics; or Basic, Fortran, APL (crazy, I know), Python, Matlab, C++, etc. that I had to use as more general languages. The choice of language will depend on the problem and the community/colleagues you end up working with. Over your career you become a polyglot.

As an agnostic (in my good days) or an atheist (in my bad ones) I am not prone to joining churches. In my research I tend to use mostly frequentist stats (of the REML persuasion) but, sometimes, Bayesian approaches feel like the right framework. In most of my problems both schools tend to give very similar, if not identical, results.

I have chosen to be an interfaith polyglot.

Cute Gibbs sampling for rounded observations

I was attending a course on Bayesian statistics where this problem showed up:

There are a number of individuals, say 12, who take a pass/fail test 15 times. For each individual we have recorded the number of passes, which can go from 0 to 15. Because of confidentiality issues, we are presented with data rounded to the closest multiple of 3 (\(\mathbf{R}\)). We are interested in estimating the parameter \(\theta\) of the Binomial distribution behind the data.

Rounding is probabilistic, with probability 2/3 if the count is one away from a multiple of 3 and probability 1/3 if it is two counts away. Multiples of 3 are not rounded.

We can use Gibbs sampling to alternate between sampling the posterior for the unrounded \(\mathbf{Y}\) and \(\theta\). In the case of \(\mathbf{Y}\) I used:
Continue reading
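As an illustrative sketch of such a sampler (not the code from the full post, which uses its own implementation): the sampler below is written in Python, assumes a uniform Beta(1, 1) prior on \(\theta\), and uses a hypothetical vector of rounded counts. It alternates between drawing each unrounded count from its discrete conditional posterior and drawing \(\theta\) from its conjugate Beta update.

```python
import math
import numpy as np

N_TRIALS = 15  # each individual takes the test 15 times

def p_r_given_y(r, y):
    """Probability that the true count y is reported as the multiple of 3 r."""
    if y % 3 == 0:                       # multiples of 3 are reported as-is
        return 1.0 if r == y else 0.0
    if y % 3 == 1:                       # one count below 3k+... rounds down w.p. 2/3
        return 2/3 if r == y - 1 else (1/3 if r == y + 2 else 0.0)
    return 2/3 if r == y + 1 else (1/3 if r == y - 2 else 0.0)

def binom_pmf(y, n, theta):
    return math.comb(n, y) * theta**y * (1 - theta)**(n - y)

def gibbs(R, n_iter=2000, burn_in=500, seed=1):
    """Alternate sampling Y | R, theta and theta | Y (uniform Beta(1,1) prior)."""
    rng = np.random.default_rng(seed)
    theta, draws = 0.5, []
    for it in range(n_iter):
        Y = np.empty(len(R), dtype=int)
        for i, r in enumerate(R):
            # unrounded count can only be within 2 of the reported multiple of 3
            support = list(range(max(0, r - 2), min(N_TRIALS, r + 2) + 1))
            w = np.array([p_r_given_y(r, y) * binom_pmf(y, N_TRIALS, theta)
                          for y in support])
            Y[i] = rng.choice(support, p=w / w.sum())
        # conjugate update: theta | Y ~ Beta(1 + passes, 1 + fails)
        theta = rng.beta(1 + Y.sum(), 1 + len(R) * N_TRIALS - Y.sum())
        if it >= burn_in:
            draws.append(theta)
    return np.array(draws)

# hypothetical rounded data for 12 individuals (all multiples of 3)
R = [6, 9, 6, 3, 9, 12, 6, 9, 6, 3, 6, 9]
posterior = gibbs(R)
print(round(posterior.mean(), 2))  # posterior mean of theta
```

Because rounding moves counts at most two steps, each unrounded count has a support of at most five values, so the conditional draw for \(\mathbf{Y}\) is a cheap discrete sample.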

Analyzing a simple experiment with heterogeneous variances using asreml, MCMCglmm and SAS

I was working with a small experiment which includes families from two Eucalyptus species and thought it would be nice to code a first analysis using alternative approaches. The experiment is a randomized complete block design, with species as a fixed effect and family and block as random effects, while the response variable is growth strain (in \( \mu \epsilon\)).

When looking at the trees one can see that the residual variances will be very different. In addition, the trees were growing in plastic bags laid out in rows (the blocks) and columns. Given that the trees were growing in bags sitting on flat terrain, most likely the row effects are zero.
Continue reading

INLA: Bayes goes to Norway

INLA is not the Norwegian answer to ABBA; that would probably be a-ha. INLA is the answer to “Why do I have enough time to cook a three-course meal while running MCMC analyses?”.

Integrated Nested Laplace Approximations (INLA) is based on direct numerical integration (rather than simulation as in MCMC) which, according to people ‘in the know’, allows:

  • estimation of marginal posteriors for all parameters,
  • estimation of marginal posteriors for each random effect, and
  • estimation of the posterior for linear combinations of random effects.

Continue reading

R, Julia and genome wide selection

— “You are a pussy” emailed my friend.
— “Sensu cat?” I replied.
— “No. Sensu chicken” blurted my now ex-friend.

What was this about? He read my post on R, Julia and the shiny new thing, which prompted him to assume that I was the proverbial old dog unwilling (or was it unable?) to learn new tricks. (Incidentally, with friends like this who needs enemies? Hi, Gus.)
Continue reading


© 2024 Palimpsest
