Modeling the Triad of User Behaviors in IT Security
Adi Marko – M.Sc. student
In the past, IT security was considered a technical issue (good algorithms and well-designed
systems would prevent security risks). Today, there is increasing awareness of the human as a key
determinant of the safety of IT systems. Security systems often do little more than provide alerts,
relying on the user to make decisions and to adjust system settings. Hence end-users' behavior
should be considered when developing security systems.
An important aspect in this context is risk-related behavior. Such behavior is not the expression
of a single tendency to take risks, but rather a combination of a number of different behaviors that
can change dynamically in response to changes in task requirements and the environment. One
model of these multiple behaviors is the Triad of Risk-related Behavior (Ben-Asher, 2011). The
original study was empirical research conducted in the context of a microworld. In this thesis I
developed a quantitative model of the three security behaviors, analyzed the relations between
them, and showed how they affect the security level of the system. In addition, I studied changes
in the predicted behavior as a function of risk attitude and the weighting of information from alerts.
The study provides some insights into the effects of different parameters on outcomes and into the
effects of deviations from optimality.
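To make the kind of model described above concrete, the sketch below shows one simple way such a
decision rule could be formalized: the user combines their own threat estimate with an alert, and a
risk-attitude parameter scales the perceived loss. This is a hypothetical illustration written for
this summary, not the thesis model; the names and values (alert_weight, risk_attitude, the payoffs)
are assumptions introduced here for demonstration.

```python
# Hypothetical sketch of a risk-weighted decision rule; parameter names and
# payoff values are illustrative assumptions, not the model from the thesis.

def perceived_threat(own_estimate, alert, alert_weight):
    """Combine the user's own threat estimate with the alert signal,
    giving the alert a relative weight between 0 and 1."""
    return (1 - alert_weight) * own_estimate + alert_weight * alert

def proceed(own_estimate, alert, alert_weight, risk_attitude,
            gain=1.0, loss=-4.0):
    """Proceed with the risky action only if its risk-adjusted expected
    value is positive (risk_attitude > 1 means risk aversion, which
    inflates the perceived loss)."""
    p = perceived_threat(own_estimate, alert, alert_weight)
    return (1 - p) * gain + p * risk_attitude * loss > 0

# Example: how the predicted behavior changes with the weight given to alerts.
for w in (0.0, 0.5, 1.0):
    print(w, proceed(own_estimate=0.05, alert=0.6, alert_weight=w,
                     risk_attitude=1.5))
```

In this toy version, giving more weight to the alert flips the decision from proceeding to refraining,
which is the kind of behavioral change the thesis examines as a function of risk attitude and alert weighting.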
This work was performed under the supervision of Prof. Joachim Meyer
Effects of user characteristics and security alerts on cybersecurity behavior
Anna Morgenshtein-Sekeles
Rapid advances in computer technology have led organizations to depend on these
technologies for practically all corporate activities. Information systems and networks
process, store, and transmit digital data. As organizations have come to depend on
information technology, the likelihood of malicious interference with the functioning
of these systems, unauthorized access to sensitive materials, and other forms of
malicious activity has greatly increased. These activities can cause major direct damage,
as well as indirect damage through the loss of reputation.
Threats are often introduced into systems as a result of user actions. Legitimate users
might perform actions such as browsing a malicious site, downloading a file with
malicious content from email or a website, connecting an infected external device to
the computer, etc. The user performs these actions for some purpose, but they may
result in undesirable outcomes if they unintentionally activate hidden malware. In
many cases, especially in organizations, various security systems can notify the user
about potentially dangerous actions. The system's detection thresholds can be configured
by the organization according to its information security policy.
The study examines how the user's ability to distinguish between threat and no-threat
events, and the user's risk aversion, affect cybersecurity behavior with security
systems that have different threshold settings, and how these behaviors affect
outcomes. A laboratory experiment was conducted in which participants had to
classify stimuli as threats or no threats, based on information they received about the
stimuli and on the output from an alerting system. The results were compared to the
predictions of a simulation model of user behavior in this context.
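As a rough illustration of what such a simulation can look like, the sketch below simulates a user
who combines a noisy observation of the stimulus with the output of an alerting system, in the
spirit of signal detection theory. It is an assumed, simplified model written for this summary;
the sensitivity and threshold values are placeholders, not the parameters used in the thesis.

```python
# Hypothetical, simplified simulation of a user classifying stimuli with the
# help of an alerting system; all numeric parameters are illustrative.
import random

def classify(is_threat, user_sensitivity, user_criterion,
             system_sensitivity, system_threshold):
    """One trial: the user sees noisy evidence about the stimulus and the
    system's alert, and classifies the stimulus as a threat if either the
    evidence exceeds the user's criterion or an alert was raised."""
    evidence = random.gauss(user_sensitivity if is_threat else 0.0, 1.0)
    alert = random.gauss(system_sensitivity if is_threat else 0.0, 1.0) > system_threshold
    return evidence > user_criterion or alert

def rate(is_threat, n=10_000, **params):
    """Hit rate (is_threat=True) or false-alarm rate (is_threat=False)."""
    return sum(classify(is_threat, **params) for _ in range(n)) / n

params = dict(user_sensitivity=1.0, user_criterion=1.0,
              system_sensitivity=2.0, system_threshold=1.0)
print("hits:", rate(True, **params), "false alarms:", rate(False, **params))
```

Varying the system threshold or the user's criterion in such a simulation yields predicted hit and
false-alarm rates that can be compared with participants' observed classifications.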
Results showed that the user's knowledge significantly affects cybersecurity behavior.
Better knowledge reduces the probability of security breaches and improves the
quality of work. The availability of a security system reduces risky behavior, but less
than predicted. People use the information they receive from the system, but they do
not utilize it optimally and tend to give excessive weight to their own knowledge.
This work was performed under the supervision of Prof. Joachim Meyer