Palimpsest

Evolving notes, images and sounds by Luis Apiolaza


Breeding trade-offs

On one hand, it is obvious what we should do: increase any of the values in the numerator (selection intensity, accuracy and genetic variability) or reduce the denominator (how long it takes us to deliver the gain). Any of those changes will increase genetic gain per year.
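For reference, the equation in question is the breeder's equation expressed per year which, in its usual notation, has selection intensity $i$, accuracy $r$ and additive genetic standard deviation $\sigma_A$ (genetic variability) in the numerator, and the generation interval $L$ (how long it takes to deliver the gain) in the denominator:

$$\Delta G_{\text{per year}} = \frac{i \, r \, \sigma_A}{L}$$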

However, the world is full of trade-offs. First, that equation is for a single trait and our breeding programmes deal with multiple traits, so we are selecting on an index that combines the genetic information for all traits (their genetic variability, heritabilities, and correlations) with their relative economic value. Not all the traits have the same value for industry. And not all the traits cost the same to assess: measuring an external characteristic, say size, is a lot easier than measuring internal characteristics, say chemical composition.
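As a toy illustration only (trait names, breeding values and economic weights are made up, not from any real programme), an economic selection index is just a weighted sum of breeding values:

```r
# Hypothetical two-trait index: rank candidates on EBVs weighted by economic value.
ebv <- data.frame(
  tree    = c("T1", "T2", "T3"),
  volume  = c( 1.2, -0.3,  0.8),   # estimated breeding values for stem volume
  density = c(-0.5,  0.9,  0.4)    # estimated breeding values for wood density
)
weights <- c(volume = 10, density = 25)   # relative economic values

ebv$index <- drop(as.matrix(ebv[, c("volume", "density")]) %*% weights)
ebv[order(-ebv$index), ]                  # candidates ranked on the index
```

The hard part, of course, is getting sensible breeding values and economic weights in the first place.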

Perhaps it is convenient to sacrifice accuracy, using a second- or third-best method for phenotyping, if we can assess more cheaply and quickly (increasing selection intensity). Perhaps it is convenient to clone our testing material (reducing effective population size), so we genotype once but test in multiple environments for multiple traits. Or we can redefine the traits, so we are not trying to predict a specific value but just check if we meet technical/quality thresholds.

There are many other options and that’s why the (more general version of the) breeder’s equation is central to what we do. It lets us play with ideas, run alternatives and adapt our breeding programmes to whatever conditions we are facing. Sometimes it is super-duper high-throughput hyperspectral drone-enabled goodness. Sometimes it is low-budget el-quicko back-of-a-workshop “appropriate” technology. Same equation, same decisions.

Start with the programming language and statistical approach used by your community

I have been very busy with the start of the semester, teaching regression modelling. The craziest thing was that the R installation was broken in the three computer labs I was allocated to use. It would not have been surprising if I were talking about Python (🤣), but the installation script had a major bug. Argh!

Anyhow, I was talking with a student who was asking me why we were using R in the course (she already knew how to use Python). If you work in research for a while, particularly in statistics/data analysis, you are bound to bump into long-running discussions. It isn’t the Text Editor Wars or the Operating System Wars. I am referring to two questions that come up all the time in long threads:

  1. What language should I learn or use for my analyses?
  2. Should I be a Bayesian or a Frequentist? You are supposed to choose a statistical church.

The easy answer for the first one is “because I say so”: it’s my course. A longer answer is that a Domain Specific Language makes life a lot easier, as it is optimised for the tasks performed in that domain. An even longer answer points to something deeper: a single language is never enough. My head plays images of Minitab, SAS, Genstat, S-Plus, R, ASReml, etc. that I had to use at some point just to deal with statistics. Or Basic, Fortran, APL (crazy, I know), Python, Matlab, C++, etc. that I had to use as more general languages at some point. The choice of language will depend on the problem and the community/colleagues you end up working with. Over your career you become a polyglot.
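As a quick example of the Domain Specific Language point, fitting and summarising a regression (the topic of the course) takes a handful of lines in R; the data here are invented purely for illustration:

```r
# Hypothetical data: tree height (m) as a function of age (years) and stocking (stems/ha).
dat <- data.frame(
  height   = c(12.1, 15.3, 18.0, 21.4, 24.8, 27.5),
  age      = c(5, 7, 9, 11, 13, 15),
  stocking = c(800, 800, 600, 600, 400, 400)
)

fit <- lm(height ~ age + stocking, data = dat)
summary(fit)   # coefficients, standard errors, R-squared, etc.
```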

As an agnostic (in my good days) or an atheist (in my bad ones) I am not prone to joining churches. In my research I tend to use mostly frequentist stats (of the REML persuasion) but, sometimes, Bayesian approaches feel like the right framework. In most of my problems both schools tend to give very similar, if not identical, results.

I have chosen to be an interfaith polyglot.

The purpose of a system is what it does (POSIWID)

This is a popular* dictum by systems theorist Stafford Beer, pointing out that the self-described purpose of a system (or an organisation) is not the same as its actual purpose. I am often reminded of POSIWID when companies or universities state their “values” and we then contrast them with what they actually value, via their application of carrots and sticks.

Famously, Google used “Don’t be evil” in their corporate code of conduct, but fired employees who complained about the ethics of their AI projects. Or your organisation states that employee wellbeing is a priority, but uses an “ambulance at the bottom of a cliff” approach: there is no prevention; instead, you are told to use mindfulness and meditation to reduce stress.

I tend to be sceptical about people and organisations that insist too much on their values; I’d rather see their results, which tend to reflect their true purpose in what they do.

*Popular in the sense of nerd popular, not pop-star popular.

Note: In the early 1970s Stafford Beer was involved in the development of Cybersyn, an attempt to plan the whole Chilean economy from a room connected to industry via 500 telex machines.

Image: replica of Cybersyn in Centro Cultural La Moneda, Santiago.

Having a peek at sheep breeding

One of the cool things about Quantitative Genetics is that it works everywhere. As a forester, I work with trees and my analyses reflect that, accounting for the biological constraints of our species (long-lived and usually, but not always, monoecious, that is, with both sexes in the same individual), experimental designs (often incomplete blocks), relatively shallow pedigrees (we started only a few generations ago), etc.

However, as a Forestry undergrad I chose to take a Quantitative Genetics course in the Department of Animal Science at the Universidad de Chile. The examples used rabbits, sheep, etc. but the equations were directly applicable to trees. As a postgrad, I was, again, in the Department of Animal Science (at Massey this time) and the courses and discussions were mostly about cows. Unsurprisingly, the equations were directly applicable to trees.

Last week, I was fitting a multivariate animal-model BLUP with trees but, with small changes, you could use the code for cows, or rabbits, or wheat, or potatoes. This means that we, quantitative geneticists, get to be interested in the developments in other industries.
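Just as an illustration (a minimal sketch assuming asreml-R 4 syntax, with hypothetical data, column names and pedigree, not the actual code I was running), a bivariate animal model looks something like this:

```r
# Sketch of a bivariate (two-trait) animal model with asreml-R 4.
# 'dat', 'pedigree' and the columns dbh, density, site and tree are hypothetical.
library(asreml)

Ainv <- ainverse(pedigree)   # inverse of the numerator relationship matrix

fit <- asreml(
  fixed    = cbind(dbh, density) ~ trait + trait:site,   # trait means and site effects
  random   = ~ us(trait):vm(tree, Ainv),                 # additive genetic (co)variances
  residual = ~ id(units):us(trait),                      # residual (co)variances
  data     = dat
)

summary(fit)$varcomp   # genetic and residual variance components
coef(fit)$random       # predicted breeding values (BLUPs)
```

Swap tree for a cow, ram or plot identifier, change the pedigree (or a genomic relationship matrix) and the traits, and essentially the same code works for the other species.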

That was a long preamble! The thing is that I came across this article on Radio New Zealand: What’s the model sheep of the future?, which linked to nProve, “a free online tool for farmers wanting to identify breeders producing rams suitable for their own operation” developed by Beef+Lamb New Zealand. I HAD to look at nProve, of course, and one thing really grabbed my attention: there is a very large number of traits that can be used to select rams, including multiple terminal indices, health indices, or just playing directly with the breeding values for specific traits. You can also filter by region within the country.

It looks like a great tool to help farmers and I imagine that there must be substantial work communicating the tool to farmers. Just in case, here is a sort of equivalent tool for radiata pine in New Zealand: TopTree.

Questions while watching Netflix

In 2022, 28% of New Zealand’s total exports and 53% of its forestry exports (by value) went to China. China’s population is predicted to fall by half (some would say crash) by 2100. How do we create new products and services, and target other markets, to replace China?

The problem is not just China, but out of the top ten markets for NZ forestry (China, Australia, South Korea, Japan, United States, Indonesia, Taiwan, India, Thailand and Philippines) four have declining and ageing populations (China, South Korea, Japan and Taiwan). These four countries receive over three quarters of NZ forestry exports (76%).

Population is not the same as consumption, but they are associated. Some of these changes will be gradual, some abrupt, but we need to prepare as soon as possible.

Graph: total population predictions by the United Nations’ Department of Economic and Social Affairs.
