A study published in Addiction Biology uses UK Biobank data to look at associations between electronic device use and common mental traits.
Prof Kevin McConway, Emeritus Professor of Applied Statistics, The Open University, said:
“This is a problematic piece of research, in my view. I think there are several reasons why the data that were used cannot really answer most of the questions that the researchers were interested in, and some of the statistical analyses and interpretations depend on assumptions that seem to me not to be justified. Overall I don’t think that the findings support the suggestion that reducing time spent using electronic devices may help reduce mental health burdens. That suggestion might or might not be true, but I don’t think that this research can tell us either way.
“The data come from a major long-term study in the UK, UK Biobank, in which a large number of measurements and questionnaire responses were collected from, in total, over half a million volunteer participants. The data generally relate to health and genetics. Participants were aged 40 to 69 when they entered the study, so were middle-aged or elderly. The new research report correctly points out that the conclusions can’t therefore be applied to young people or to groups with a different ethnic composition. In fact a criticism sometimes also made of the UK Biobank data is that the participants were not even particularly representative of the UK population of the ages included, as you might expect in a group of people who volunteered to take part in a study related to health. In some contexts that may not matter, but I suspect that it does here, given that the main factors of interest (electronic media use and some psychological traits) have quite strong social aspects.
“But that’s not the biggest problem in terms of the analysis and interpretations, I’d say. The research is observational. That is, the original UK Biobank researchers did not seek to change what the participants would have done anyway (as would happen in some other kinds of research). They just measured and recorded what the participants did, or said they did, anyway. When looking for an association between, say, computer use and depression score, the problem is that there will be many differences between participants who use computers a lot and those who use computers just a little, apart from their computer use. So, if using computers a lot is associated in the data with higher levels of depression, we can’t say whether the computer use caused the differences in depression levels. The real cause of differences in depression levels could be one or more of the other differences between people who use computers to different extents – or (in this study) it could even be that differences in depression are causing the differences in computer use, rather than the other way round.
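The confounding problem described above is easy to see in a toy simulation. The sketch below uses entirely made-up numbers (not the study's data): a hypothetical third factor drives both "computer use" and "depression score", so the two are associated even though, by construction, one has no causal effect on the other; adjusting for that factor makes the association vanish.

```python
import random
import statistics

random.seed(42)
n = 10_000

# Hypothetical confounder (some third factor) - illustrative only.
u = [random.gauss(0, 1) for _ in range(n)]
# "Computer use" is driven by the confounder plus noise; by construction
# it has NO causal effect on the depression score.
use = [ui + random.gauss(0, 1) for ui in u]
# "Depression score" is driven only by the confounder plus noise.
dep = [ui + random.gauss(0, 1) for ui in u]

def slope(xs, ys):
    """Simple-regression slope of ys on xs: cov(x, y) / var(x)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)
    return cov / statistics.pvariance(xs)

# Naive association: clearly non-zero despite there being no causal link.
naive = slope(use, dep)

# Adjust for the confounder by residualising both variables on u first;
# the slope between the residuals is then close to zero.
b_ux, b_uy = slope(u, use), slope(u, dep)
resid_use = [x - b_ux * c for x, c in zip(use, u)]
resid_dep = [y - b_uy * c for y, c in zip(dep, u)]
adjusted = slope(resid_use, resid_dep)
```

Of course, in real data one can adjust only for confounders that were measured, which is exactly the limitation noted above.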
“The first set of statistical analyses that the new researchers report, the associations shown in Figure 1 and Table 2 in the research paper, could well be affected by this issue that they can’t show what causes what. One way of trying to make more sense of this is to make statistical adjustments for factors that might differ between people with different levels of electronic media use and might also affect the measures of their psychological traits, and the researchers did make several such adjustments – but the adjusted results cannot by any means establish what causes what, because one can never tell whether all the relevant factors were adjusted for.
“These adjustments do make a difference to the findings, as shown in Table 2. In interpreting the results there (and elsewhere), the researchers largely concentrate on what are called the P values for the quantities involved, and do not comment on the signs (positive or negative) or the sizes of the estimated quantities (called Beta in the table). Basically, a positive Beta value is saying that the psychological measure involved tends to be higher in people who make greater media use. The researchers used three different statistical models, which adjusted for different factors. They do not comment on the fact that, for the model that adjusted for the most factors (model 2), the Beta estimates for anxiety score become negative, meaning that anxiety tends to be lower in people who make greater media use, when they were positive in the other models. This is unlikely to be an error – but it is pointing to the patterns of cause and effect being complicated (and impossible to unpick in these particular statistical analyses).
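A sign change like the one described for the anxiety estimates can arise quite naturally when an adjusting variable is correlated with the exposure. Here is a toy sketch with invented numbers (not the study's data or its actual models): "media use" has a small negative direct effect on an outcome, but is positively correlated with a covariate that raises the outcome, so the unadjusted slope is positive while the adjusted slope is negative.

```python
import random
import statistics

random.seed(1)
n = 20_000

# A covariate correlated with media use - purely illustrative numbers.
c = [random.gauss(0, 1) for _ in range(n)]
# "Media use" is positively correlated with the covariate.
use = [0.7 * ci + random.gauss(0, 1) for ci in c]
# Outcome: a small NEGATIVE direct effect of use (-0.3) plus a strong
# positive effect of the covariate, plus noise.
out = [-0.3 * ui + 1.0 * ci + random.gauss(0, 1) for ui, ci in zip(use, c)]

def slope(xs, ys):
    """Simple-regression slope of ys on xs: cov(x, y) / var(x)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)
    return cov / statistics.pvariance(xs)

# Unadjusted slope of the outcome on use: POSITIVE, because the
# covariate's effect leaks through its correlation with use.
unadjusted = slope(use, out)

# Adjusted slope (residualise both on the covariate first): NEGATIVE,
# recovering the direct -0.3 effect.
b_cu, b_co = slope(c, use), slope(c, out)
r_use = [ui - b_cu * ci for ui, ci in zip(use, c)]
r_out = [oi - b_co * ci for oi, ci in zip(out, c)]
adjusted = slope(r_use, r_out)
```

The point is only that such flips are an expected feature of adjustment when variables are intertwined, not evidence of a mistake – which is consistent with the comment above that the cause-and-effect patterns are complicated.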
“Further, a particular issue in this study is that, if there is an association between media use and a psychological trait, and if we could magically tell that all other relevant factors have been taken into account, we still couldn’t tell whether the different amounts of media use were causing changes in the psychological traits, or the different psychological traits were causing changes in media use. That’s acknowledged by the researchers in their paper, and it arises because the electronic media use and the psychological measures were recorded at the same time. If instead they had been able to look at media use on one occasion, and then at depression, anxiety and so on at a later occasion, that might have helped a bit in making sense of cause and effect (though it certainly wouldn’t make that problem go away) – but that couldn’t be done because the researchers were using data already collected by a different team for the UK Biobank.
“One way of trying to deal with this issue of cause and effect is to carry out what’s called a Mendelian randomization, which effectively gets round the problem by making use of the fact that people’s genetic makeup is determined at conception, at random from their parents’ genotypes. If certain assumptions can be made about the cause and effect relationships involved, that can sometimes allow an inference to be made about what’s causing what. The researchers on this new study did carry out Mendelian randomization analyses, with the results shown in Table 3 in their paper, but I am far from being convinced that the necessary assumptions are valid. The analysis looks at an association between an exposure (here, one of the measures of electronic media use) and an outcome (here, one of the psychological measures). One assumption is that the genes involved in the analyses can affect the exposure (say, the amount of computer use), and that may well be the case here.
“But another assumption is that the genes involved can affect the outcome (say, the depression score) only through their effect on the exposure. It seems unlikely to me that these specific genes can affect a complex behaviour such as computer use but cannot affect depression or anxiety levels except through computer use. The researchers do report that they did some statistical tests to check this assumption, but I still have doubts about how plausible it is – and if it is not valid, then the Mendelian randomizations don’t tell us anything useful about the existence or direction of a cause-and-effect association between the media use and the psychological measures.
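The consequence of that assumption failing can be shown with a toy simulation (invented numbers, not the study's data). Using the simplest single-instrument Mendelian randomization estimate – the Wald ratio, cov(gene, outcome) / cov(gene, exposure) – the estimate is near the true causal effect (zero, by construction) when the gene affects the outcome only via the exposure, but reports a spurious effect when the gene also affects the outcome directly (pleiotropy).

```python
import random
import statistics

random.seed(7)
n = 20_000

# Allele count (0, 1 or 2) for a hypothetical variant - illustrative only.
g = [random.randint(0, 1) + random.randint(0, 1) for _ in range(n)]
# The variant affects the exposure ("computer use").
use = [0.5 * gi + random.gauss(0, 1) for gi in g]

def cov(xs, ys):
    """Population covariance of two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)

def wald_ratio(gene, exposure, outcome):
    """Single-instrument MR estimate: cov(g, y) / cov(g, x)."""
    return cov(gene, outcome) / cov(gene, exposure)

# Case 1: the assumption holds - the variant affects the outcome only via
# the exposure. True causal effect of the exposure is zero here.
y_clean = [random.gauss(0, 1) for _ in range(n)]
est_clean = wald_ratio(g, use, y_clean)   # close to the true value, 0

# Case 2: pleiotropy - the variant ALSO affects the outcome directly.
# The causal effect of the exposure is still zero, but MR now reports
# a substantial (entirely spurious) effect, about 0.3 / 0.5.
y_pleio = [0.3 * gi + random.gauss(0, 1) for gi in g]
est_pleio = wald_ratio(g, use, y_pleio)
```

This is only a sketch of the mechanism; the paper's analyses are more elaborate, but the direction of the problem is the same.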
“One minor technical issue connected with these analyses, which I’ll mention for the benefit of people who know the usual terminology for multivariable regression (a statistical method used for the analyses that produced Figure 1 and Table 2), is that the research paper uses the term “instrumental variables” for what are more usually called “independent variables” or “explanatory variables”. This appears in the Methods section describing these analyses, and it also appears in Table 3 about the Mendelian randomization analysis. What could be particularly confusing here, to people who know the jargon, is that there really are things called instrumental variables in a Mendelian randomization, but in that context the term refers to the genetic variables, not to what an epidemiologist would call the exposure or possible risk factor – yet it is the exposures that Table 3 labels as instrumental variables.
“The final set of statistical analyses in the new research involves what is known as GWEIS, or genome-wide-by-environment interaction studies. In the context of this research, that means that the researchers are looking for evidence of differences in the ways that genes affect the psychological measures, depending on how much use people make of electronic media. That’s quite a complex idea in terms of what might be causing what, particularly taken alongside the problems in looking at cause and effect that I have previously mentioned. Not much detail on the findings of this analysis is given in the research report, and given all the loose ends from the other analyses, I certainly don’t think that this one can tell us anything about exactly what is going on.”
‘Associations between electronic devices use and common mental traits: A gene–environment interaction model using the UK Biobank data’ by Jine Ye et al. was published in Addiction Biology at 5:01 UK time on Wednesday 8th December.
Prof Kevin McConway: “I am a Trustee of the SMC and a member of its Advisory Committee. My quote above is in my capacity as an independent professional statistician.”