Evolving notes, images and sounds by Luis Apiolaza

Category: research

Early selection: how early is early enough? Part 2

In the previous post I mentioned that we wanted to screen trees for wood properties as early as possible, BUT there is a lot of “noise” with the mix of normal and reaction wood (compression in softwoods or tension in hardwoods). The main problems for running a glasshouse experiment were:

  • How to separate normal and reaction wood? Here the good-old-leaning-trees approach was handy.
  • Trees move a lot in real life, so what’s the effect of thigmomorphogenesis (fancy name for the response to movement)? How can we move them? Build a rocking machine: having good technicians helps.
  • How good are the screening methods? Before embarking on a big experiment, better to look first at a few clones with contrasting wood properties (4 Arborgen varieties). If that doesn’t work, pull the plug.

So we got a glasshouse with four clones: some ramets standing, some leaning and some rocking for eight months. Standing and rocking trees had random arcs of compression wood, but rocking reduced wood stiffness by 20%, which is similar to what happens to mature trees on the edge of stands. Leaning trees nicely separated normal and compression wood, which could now be analyzed separately; not only that, but they also magnified the differences between the clones. TO BE CONTINUED.
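For a rough sense of that 20% figure (this is not the actual analysis in the paper), one can treat the rocking effect as a simple contrast between treatment means. A minimal sketch in Python, with made-up stiffness values:

```python
# Toy comparison of wood stiffness (modulus of elasticity, GPa) between
# standing and rocking ramets. All numbers are hypothetical and only
# mimic the ~20% stiffness reduction under rocking mentioned above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
standing = rng.normal(loc=5.0, scale=0.5, size=12)  # standing ramets
rocking = rng.normal(loc=4.0, scale=0.5, size=12)   # ~20% lower mean

t, p = stats.ttest_ind(standing, rocking)
reduction = 100 * (1 - rocking.mean() / standing.mean())
print(f"t = {t:.2f}, p = {p:.4f}, reduction ≈ {reduction:.0f}%")
```

A real analysis would, of course, account for clones and ramets (e.g., with a mixed model) rather than pooling everything into two groups.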

Read more details here: https://rdcu.be/donFJ

Luis A. Apiolaza, Brian Butterfield, Shakti S. Chauhan & John C. F. Walker. 2011. Characterization of mechanically perturbed young stems: can it be used for wood quality screening? Annals of Forest Science 68: 407–414.

Early selection: how early is early enough? Part 1

Different people write for different reasons. In my case, I write to remember how and why I did research with colleagues and, of course, to share the results. After working for a while on a topic, it is easy to forget how the whole project started, which is why I will write a few notes on early selection for wood properties. This is Part 1.

Fifteen years ago, Prof Walker and I were chatting about reducing the rotation age for radiata pine, which involved “fixing” corewood quality. Corewood—the first 10 rings or so in a tree—has a high microfibril angle, which leads to low stiffness and poor dimensional stability. But how early could we assess it?

Trees are large, heterogeneous and crazy expensive to measure. Could we instead measure small trees or, let’s be heroic, seedlings? One problem is that small trees generate random arcs of reaction wood (compression in softwoods, tension in hardwoods), creating a lot of “noise” in the assessments. Can we separate these types of wood to reduce or eliminate the noise? We set up a glasshouse experiment to assess wood property differences at one year of age. TO BE CONTINUED.

Compression wood in leaning 8-month-old radiata pine (photo: Brian Butterfield).

Read more details here: https://rdcu.be/donFJ

Luis A. Apiolaza, Brian Butterfield, Shakti S. Chauhan & John C. F. Walker. 2011. Characterization of mechanically perturbed young stems: can it be used for wood quality screening? Annals of Forest Science 68: 407–414.

Eucalypt essential oils

These days I deal mostly with quantitative genetics and wood properties. However, trees are much more than wood, and one of our PhD students at the School of Forestry, University of Canterbury, had a look at the production of eucalypt essential oils, particularly cineole.

One can easily see the seasonality of production (peaking in spring and summer) and the difference between juvenile and mature foliage. Details available at:

Chamira Rajapaksha, Luis A. Apiolaza, Marie A. Squire and Clemens Altaner. 2023. Seasonal variation of yield and composition in extracts from immature and mature Eucalyptus bosistoana leaves. Flavour and Fragrance Journal 38(4): 293–300. Open Access at https://doi.org/10.1002/ffj.3742

Seasonality for cineole and total oil, plotted by family and leaf type.

Dropping predatory journals

Web of Science de-listed (stopped indexing) 82 journals for essentially predatory practices, including some long-suspected publishers (like Hindawi, with 15 journals) and more established ones (like Routledge Journals, Taylor & Francis LTD, with 4). A full list with details of the journals is available in this Google Sheet.

The original Clarivate (owner of Web of Science) post gives some more detail:

We have always been responsive to community and customer feedback when prioritizing which journals to re-evaluate. In recent months, we have taken additional proactive steps to counter the increasing threats to the integrity of the scholarly record. We have invested in a new, internally developed AI tool to help us identify outlier characteristics that indicate that a journal may no longer meet our quality criteria.

This technology has substantially improved our ability to identify and focus our re-evaluation efforts on journals of concern. At the start of the year, more than 500 journals were flagged. Our investigations are ongoing and thus far, more than 50 of the flagged journals have failed our quality criteria and have subsequently been de-listed.

Sometimes researchers knowingly choose to publish in predatory journals; sometimes we do it by accident. One of my articles ended up in a predatory journal because the corresponding author was confused by the name, which was almost identical to that of a ‘proper’ journal. I got quite upset (a wasted paper), but shit happens.

Reviewing a manuscript in two hours

Today I declined to review a manuscript for a journal because the English in the title and abstract, which is all I received, was quite poor. The manuscript sounded more or less interesting, but dealing with it would have taken more time and effort than I could afford while keeping my sanity.

I can spend, roughly, two hours on a review. There is too much going on in the world to spend longer than that on a low-return activity. If I can’t finish a review in two hours I will postpone it, and the manuscript will quickly disappear under a pile of newer papers.

I see my job as judging the plausibility of the manuscript. Does it make sense? Can you even get that type of result from your data? Does it fit in the broader context?

  • I will not fix the writing: that’s not my job. A proper editor should fix that.
  • I don’t care about the format of the references and won’t check that all of them are in the list. That’s a job for the editor/publisher. I have only two hours.
  • I won’t derive all the equations or check the computer code. I have only two hours.
  • I will check that you’re using the right, or close-enough, methods. I will point out when the methods are wrong or needlessly inefficient.
  • I won’t write huge lists of changes, but only the most relevant ones.
  • I’ll check if the conclusions make sense with respect to what you did.

As I have only two hours, I won’t take on a manuscript that requires fighting with the writing to figure out what’s going on. I will spend longer, sometimes much longer, when I am reading and evaluating the work of students, but not for a random person on the internet.

I don’t like the current publication system, which is part of a larger system that feels like a pyramid scheme. The incentives are wrong: we are pushing people to publish too much, more and more people are trying to publish, and their careers depend on making it in a system with a false sense of scarcity (there is no reason for page limits in an “issue” anymore).

Someone may say that two hours is not long enough to “properly review” a manuscript. Well, these are my rules; if you don’t like them… tough luck.
