CONFEST 2026

Speaker

Colin O'Halloran

Honorary Visiting Professor at the University of York, UK;
CTO of D-RiSQ Ltd, UK

Bio

Colin has over 40 years of experience in high-integrity systems, operating as a scientist, consultant and business group manager with RSRE, DERA and QinetiQ. He is an Honorary Visiting Professor at the University of York and was previously a full professor of computer science at the University of Oxford. Colin is an internationally recognised expert in the validation and verification of software-based systems using formal methods. At RSRE he developed parts of the MALPAS analysis tool, which was used in the assessment of the secondary protection system of the Sizewell B nuclear power station. Within DERA he was the technical authority for the development of the formal semantics of SPARK 83 carried out by PVL Ltd. Within QinetiQ he led the team that developed tools to mathematically verify the flight and engine control software for the Typhoon aircraft, as well as its auto-throttle and auto-pilot. In 1996 he served on the European Space Agency’s Board of Inquiry into the very first launch of Ariane 5 and compiled the parts of the report concerning the software’s contribution to the failure. He carried out a survey of certification approaches on behalf of the Director of Helicopter Projects following the Chinook crash on the Mull of Kintyre. Since 2012 Colin has been CTO of D-RiSQ Ltd, which has developed automated, formal-methods-based mathematical tools covering requirements capture and design verification, backed up by source code and object code verification.

Reducing Time to Market through Formal Methods

Model-Based Software Engineering (MBSE) integrates simulation, automation and traceability into the development process. Using tools such as Simulink and Stateflow, developers can simulate system behaviour, automate code generation and maintain traceability from requirements to implementation. The major drawback with MBSE is showing not only that the system does what is required but, more importantly from a certification perspective, that it has no unwanted functionality, including showing what happens when, not ‘if’, things go wrong. This challenge arises because the semantics of the models are often unclear, the link between a model and its requirements is often obscure, and it is difficult to show efficiently and effectively that the generated code correctly implements the requirements for the model. The key is to leverage MBSE by evaluating the verification artefacts it generates using automated reasoning. A key application of this approach to Robotic and Autonomous Systems is discussed in terms of commercial impact.