Note: my question is about a similar issue to a previous (unanswered) post: Predictions from rma.mv in metafor not aligning with data distribution or BLUPs
I am getting puzzling results from an rma.mv analysis. When I include a study that contributes information to multiple levels of a moderator, the rma.mv estimates are very different from those of the same analysis run without that study's data, even though the overall distribution of the data changes little. The estimates from the smaller dataset are much closer to the unweighted means. The issue seems to be related to the random effects, because it disappears when I analyze the data without them.
Here is a subset of the data:
library(metafor)

data.test <- read.table(header = TRUE, text = "Site.SubsiteName PubID es var Sample.Grazed
northwest 3 -28.36363636 28.375 both
northwest 3 8.726465954 2.912229617 both
northwest 3 -0.559242341 0.260933742 both
northwest 3 10.83938638 4.357505071 both
northwest 3 5.999445343 1.508319468 both
northwest 3 -0.559242341 0.260933742 both
northwest 3 9.942817844 3.706110978 both
Coyote 4 -3.08618206 0.799086757 both
Coyote 4 6.353381786 2.307223021 both
Coyote 4 -1.884433198 0.506983204 both
Coyote 4 3.188966721 0.830627019 both
Coyote 4 6.316544474 2.284399902 both
Coyote 4 6.525682696 2.415736792 both
Coyote 4 -0.633067236 0.352931374 both
Coyote 4 1.407325864 0.430183932 both
Coyote 4 9.49913955 4.7458007 grazed
Coyote 4 7.394539775 3.007172432 grazed
pasture 6 0.766204677 0.362041309 grazed
pasture 6 0.633042619 0.352929849 grazed
pasture 6 -1.004918868 0.382716049 grazed
pasture 6 0.844113092 0.368176229 grazed
pasture 6 -3.517124114 0.938239984 grazed
pasture 6 -5.155282263 1.63295777 grazed
pasture 7 -4.677643302 0.920085623 grazed
pasture 7 -3.412620272 0.606658049 grazed
pasture 8 -3.077892871 0.796588178 grazed
pasture 8 -1.491594237 0.442129693 grazed
Belmont 26 0.389082605 0.466338002 grazed
Belmont 26 0.389082605 0.466338002 grazed
Belmont 26 0.920136949 0.375434796 grazed
Belmont 26 0 0.618230951 grazed
Belmont 26 0 0.618230951 grazed
Belmont 26 0.620730881 0.415769353 grazed")
model.grazed <- rma.mv(data = data.test, yi = es, V = var, mods = ~ Sample.Grazed - 1,
                       random = list(~ 1 | Site.SubsiteName, ~ 1 | PubID))
summary(model.grazed)
model.grazed2 <- rma.mv(data = data.test[data.test$PubID != 4, ], yi = es, V = var,
                        mods = ~ Sample.Grazed - 1,
                        random = list(~ 1 | Site.SubsiteName, ~ 1 | PubID))
summary(model.grazed2)
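To show what I mean by "the issue disappears without random effects", here is a sketch of the check I ran: the same moderator model fitted with metafor's rma() and no random-effects structure (variable names model.fixed/model.fixed2 are just my labels):

```r
# Same moderator, but a fixed-effects fit with no random-effects structure;
# the large with/without-PubID-4 discrepancy does not appear here.
model.fixed  <- rma(yi = es, vi = var, mods = ~ Sample.Grazed - 1,
                    data = data.test, method = "FE")
model.fixed2 <- rma(yi = es, vi = var, mods = ~ Sample.Grazed - 1,
                    data = data.test[data.test$PubID != 4, ], method = "FE")
coef(model.fixed)
coef(model.fixed2)
```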
(Note that the full dataset has 275 effect sizes. I also see a similar discrepancy when using only one of the two random effects.)
The first model’s estimate for the “both” category is -3.3 (compared with an unweighted mean of 1.6); the second model’s estimate for the “both” category is 0.7 (compared with an unweighted mean of 0.9).
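For reference, the unweighted means quoted above can be reproduced directly from the subset posted here:

```r
# Unweighted means of the "both" effect sizes, with and without study 4
mean(data.test$es[data.test$Sample.Grazed == "both"])                         # ~1.6
mean(data.test$es[data.test$Sample.Grazed == "both" & data.test$PubID != 4])  # ~0.9
```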
Why is there such a large difference? Is there a different way I should specify the model or random effects to avoid this behavior?