Stock assessments require a series of decisions, both mathematically sound and realistic, to build a model that successfully represents the true condition of the fishery. The major decisions are often guided by three main types of complexity: knowledge complexity, mathematical complexity and computational complexity. Knowledge complexity refers to what we know about the biological systems governing the stock, for example, how species interact: Moreton Bay bugs feed on scallops but also become the primary target of the fishery when scallop stocks are low. These interactions are difficult to quantify, or even to qualify, yet they clearly play a major role in stock abundance. Mathematical complexity refers to the mathematical model itself and how difficult it is to build equations that accurately represent the true dynamics of the population. For example, most models assume a constant virgin biomass, yet environmental changes over the past century, such as rising temperatures and changing currents, undoubtedly affect the equilibrium abundance of fish stocks. Finally, computational complexity reflects the limits of the computing resources available: incorporating additional data, or discretising the temporal and spatial domains onto finer grids, significantly increases the runtime of the model and therefore the overall investment of time and/or cost. So how do scientists choose which complexities are worth researching, worth investing in, worth modelling? How do we determine what ‘worth’ means? Is it accuracy of results? Is it capturing as many effects as possible? Is it statistical significance? Or is it a more holistic characterisation of risk? In this talk I will step through these considerations using Queensland Moreton Bay bug and saucer scallop populations as a case study, and outline how we can develop a better understanding of the true trade-offs behind questions of data availability and model complexity.
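To make the virgin-biomass assumption concrete, consider a generic Schaefer surplus production model, used here purely as an illustration rather than as the assessment model actually applied to these stocks. It treats the carrying capacity (the virgin biomass) as a fixed constant:

\[
B_{t+1} = B_t + r B_t \left(1 - \frac{B_t}{K}\right) - C_t,
\]

where \(B_t\) is the biomass in year \(t\), \(C_t\) is the catch, \(r\) is the intrinsic growth rate and \(K\) is the constant carrying capacity, identified with the virgin (unfished) biomass. Allowing \(K\) to vary through time, for example in response to temperature or currents, is one way the environmental effects mentioned above could be captured, at the cost of additional parameters and mathematical complexity.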