Laser interferometric precision measurements, such as optical clocks or gravitational-wave detectors (GWDs), have reached fundamental limits that require advanced concepts to overcome. New numerical methods for modeling the experiments are needed to understand the technology at these extreme phase sensitivities. Modeling software has always involved a compromise between speed and usability on the one hand and accuracy and range of validity on the other; thus, when computer technology advances or the required precision of the models changes, the models should be adapted accordingly. We now face the challenge of implementing the effects of the quantum behaviour of light and of microscopic mirror surface changes in optical simulations. This demands a renewed understanding and a re-implementation of the numerical models and of the physics algorithms on which they are based. At the same time, advances in parallel processing promise fast computation on the many cores of a desktop computer via GPU (Graphics Processing Unit) programming interfaces, opening new possibilities for higher precision without resorting to supercomputers. While massive computing is routinely used for simulations in several fields of science and technology, it is less common in optical interferometry, mainly because powerful analytical methods exist. However, dealing with real, imperfect devices renders the approximations underlying those methods unsuitable. The simulation sub-programme is innovative in this field, and its research addresses real practical problems in optics and computing.
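To illustrate why GPU programming interfaces fit this problem class, consider a minimal sketch (all numbers and names here are illustrative assumptions, not taken from any specific simulation code): a microscopic mirror surface-height map is applied to an optical field as a position-dependent phase shift. Every grid point is independent, so the operation is embarrassingly parallel and maps directly onto GPU array libraries (for example CuPy, which mirrors the NumPy API, so the same code can run on a GPU by swapping the array module).

```python
import numpy as np

# Illustrative sketch: reflection of a flat wavefront off a mirror with
# nanometre-scale surface roughness. Values are assumptions for the example.
wavelength = 1064e-9           # Nd:YAG laser wavelength [m]
k = 2 * np.pi / wavelength     # wavenumber [1/m]

n = 1024                                       # grid resolution
rng = np.random.default_rng(0)
surface = 1e-9 * rng.standard_normal((n, n))   # ~1 nm RMS height map [m]

field = np.ones((n, n), dtype=complex)         # idealised incoming wavefront
# Reflection adds a phase of 2*k*h(x, y) at each point (double pass);
# this element-wise operation parallelises trivially across grid points.
reflected = field * np.exp(2j * k * surface)

rms_phase = np.std(np.angle(reflected))
print(f"RMS phase distortion: {rms_phase:.3e} rad")
```

On a GPU, `import cupy as np` (with data kept on the device) would execute the same element-wise kernel across thousands of cores, which is the kind of speed-up the text refers to.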