This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
In a nutshell, the program/automaton (“the weird machine”) is driven into an unexpected state through at least one unexpected state transition, triggered by a user-crafted data flow. The only way to get it done safely is to make your expectation precisely match the weird machine you actually implemented, either by expanding your expectation or by shrinking the “weird part” of your automaton.
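A minimal sketch of this idea, in Python. The state names, tokens, and the bug here are all hypothetical, invented purely to illustrate how a crafted data flow can push an automaton into a state outside its author's expectation:

```python
# Hypothetical toy automaton whose *intended* states are
# {START, AUTHED, DONE}. One sloppy transition rule lets
# crafted input drive it elsewhere -- a minimal "weird machine".

INTENDED_STATES = {"START", "AUTHED", "DONE"}

def step(state, token):
    # Intended transitions:
    if state == "START" and token == "login":
        return "AUTHED"
    if state == "AUTHED" and token == "quit":
        return "DONE"
    # Bug: any other token is copied straight into the state --
    # an unexpected transition reachable only via crafted input.
    return token

def run(tokens):
    state = "START"
    for t in tokens:
        state = step(state, t)
        if state not in INTENDED_STATES:
            print(f"weird state reached: {state!r}")
    return state

# Benign input stays within the author's expectation:
run(["login", "quit"])           # ends in "DONE"
# Crafted input drives the automaton into an unexpected state:
run(["login", "admin_shell"])    # prints: weird state reached: 'admin_shell'
```

Fixing it means either rejecting unknown tokens in `step` (shrinking the weird part) or documenting the echo behaviour as intended (expanding the expectation); either way, expectation and implementation end up matching.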
This is a purely technical viewpoint and has nothing to do with so-called “branches of security” such as OpSec, reverse engineering, forensics (without exploitation), or unauthorized access; all of those should already be part of your expectation.
The point here is solely whether there is any *unexpected* and exploitable behaviour in the target automaton.
Just my humble opinion, feel free to correct me.