What we really mean when we talk about 'vulnerability'

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
[Figure: Venn diagram of what we look for when hunting vulnerabilities, from two perspectives: the LangSec model (unexpected states, left circle) and the data-flow model (tainted data flows, right circle).]

In a nutshell, the program/automaton (“the weird machine”) somehow reaches an unexpected state through at least one unexpected state transition, triggered by a user-crafted data flow. The only way to stay safe is to make your expectation precisely match the automaton you actually implemented: either expand your expectation to cover what the machine really does, or shrink the “weird part” of the automaton until it matches your expectation.
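A minimal sketch of that gap between expectation and implementation, in Python. The protocol and the `parse_message` function are my own hypothetical illustration, not from the post: the *intended* automaton accepts only `"<len>:<payload>"` where `len` equals the payload length, but the *implemented* automaton is slightly larger, and a crafted input drives it into an accept state the author never expected.

```python
def parse_message(raw: str) -> str:
    """Intended contract: return payload iff raw == f"{len(payload)}:{payload}"."""
    head, sep, payload = raw.partition(":")
    if not sep:
        raise ValueError("missing separator")
    n = int(head)
    # Bug: Python slicing clips out-of-range indices, so for any n >= len(payload)
    # this check passes. The implemented automaton accepts more than intended.
    if payload[:n] != payload:
        raise ValueError("length mismatch")
    return payload

# Expected transition: the spec and the implementation agree here.
assert parse_message("5:hello") == "hello"

# Unexpected state: a crafted data flow reaches the accept state even though
# the declared length does not match the payload. This is the "weird part".
assert parse_message("999:hello") == "hello"
```

“Diminishing the weird part” here means replacing the clipped-slice comparison with the strict check `n == len(payload)`, so the implemented automaton shrinks back to the one you expected.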

This is a purely technical viewpoint. It has nothing to do with the so-called “branches of security”, such as OpSec, reverse engineering, forensics (without exploitation), or unauthorized access; those concerns should already be included in your expectation.

The point here is whether or not there's any *unexpected* and exploitable behaviour in the target automaton.

Just my humble opinion, feel free to correct me.