Population models can help inform management decisions, but, like all models, they do not perfectly reflect reality. A key challenge is to show that a model is sufficiently realistic to meet its intended purpose(s), ideally by comparing model predictions to trends observed in independent empirical datasets. Unfortunately, this is often not possible, which can lead to uncertainty about the reliability of models and their use in conservation decision-making.
We will present work undertaken to validate a stochastic population model developed to predict responses of the native Golden Perch (Macquaria ambigua) to different flow management scenarios in the Murray and Murrumbidgee River catchments in the southern Murray-Darling Basin. We compared population model predictions (population size, population growth rate, and movement rate) to empirical datasets from across the study region that were independent of the model-fitting process.
We found good alignment between model predictions and empirical observations, with the model generally reproducing the key trends in the observed data. Where misalignments occurred, they were likely due to factors not currently represented in the model (e.g. blackwater events and stocking) or to mis-specified parameters (e.g. movement rates); identifying these gaps directly informs updates to the model structure. Some misalignments may also reflect variability in the empirical datasets themselves, such as changes in fish detectability as a function of changes in flow.
Model validation is critical but rarely undertaken, often because independent datasets against which to compare model predictions are lacking. Our approach highlights the importance of model validation, shows how validation outputs can indicate when population model predictions can be interpreted with confidence and when more caution is needed, and identifies how the model might be updated in the future.