DEPENDABILITY AND SECURITY SPECIFICATION: RISK-DRIVEN REQUIREMENTS

Risk-driven specification is an approach that has been widely used by safety- and security-critical systems developers. It focuses on those events that could cause the most damage or that are likely to occur frequently. Events that have only minor consequences or that are extremely rare may be ignored. In safety-critical systems, the risks are associated with hazards that can result in accidents; in security-critical systems, the risks come from insider and outsider attacks on a system that are intended to exploit possible vulnerabilities.

A general risk-driven specification process (Figure) involves understanding the risks faced by the system, discovering their root causes, and generating requirements to manage these risks. The stages in this process, illustrated by the code sketch after the list, are:
1. Risk identification: Potential risks to the system are identified. These are dependent on the environment in which the system is to be used. Risks may arise from interactions between the system and rare conditions in its operating environment.
2. Risk analysis and classification: Each risk is considered separately. Those that are potentially serious and not implausible are selected for further analysis. At this stage, risks may be eliminated because they are unlikely to arise or because they cannot be detected by the software.
3. Risk decomposition: Each risk is analyzed to discover potential root causes of that risk. Root causes are the reasons why a system may fail. They may be software or hardware errors or inherent vulnerabilities that result from system design decisions.
4. Risk reduction: Proposals are made for ways in which the identified risks may be reduced or eliminated. These contribute to the system dependability requirements that define the defence against the risk and how the risk will be managed.
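A minimal sketch of how these four stages might be recorded in a simple risk register is shown below. This is illustrative only: the Risk fields, the severity and likelihood scales, and the example risk, root cause, and requirement are all assumptions made for this example, not part of the original material.

from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    MINOR = 1
    SERIOUS = 2
    CATASTROPHIC = 3


class Likelihood(Enum):
    RARE = 1
    OCCASIONAL = 2
    FREQUENT = 3


@dataclass
class Risk:
    """One entry in a hypothetical risk register."""
    description: str
    severity: Severity        # estimated consequence if the risk arises
    likelihood: Likelihood    # estimated frequency of occurrence
    root_causes: list[str] = field(default_factory=list)   # filled in during risk decomposition
    requirements: list[str] = field(default_factory=list)  # filled in during risk reduction


def classify(risks: list[Risk]) -> list[Risk]:
    """Risk analysis and classification: keep only risks that are both
    potentially serious and not implausible; the rest are set aside."""
    return [r for r in risks
            if r.severity is not Severity.MINOR and r.likelihood is not Likelihood.RARE]


# Risk identification: record a (hypothetical) risk from the operating environment.
undetected_hazard = Risk(
    description="Sensor failure leaves a hazardous condition undetected",
    severity=Severity.CATASTROPHIC,
    likelihood=Likelihood.OCCASIONAL,
)

# Risk decomposition: record a potential root cause of the risk.
undetected_hazard.root_causes.append("Sensor value not refreshed within its deadline")

# Risk reduction: derive a dependability requirement that defends against the risk.
undetected_hazard.requirements.append(
    "The system shall raise an alarm if no valid sensor reading is received within N seconds."
)

for risk in classify([undetected_hazard]):
    print(risk.description, "->", risk.requirements)

The classify function reflects the selection rule in stage 2: risks that are both minor and rare are dropped before the more expensive decomposition and reduction work is done.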

For large systems, risk analysis may be structured into phases, where each phase considers different types of risks:
1. Preliminary risk analysis, where major risks from the system’s environment are identified. These are independent of the technology used for system development. The aim of preliminary risk analysis is to develop an initial set of security and dependability requirements for the system.
2. Life-cycle risk analysis, which takes place during system development and which is mostly concerned with risks that arise from system design decisions. Different technologies and system architectures have their own associated risks. At this stage, you should extend the requirements to protect against these risks.
3. Operational risk analysis, which is concerned with the system user interface and risks from operator errors. Again, once decisions have been made on the user interface design, further protection requirements may have to be added. The sketch after this list shows how requirements from the three phases accumulate.
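As a rough illustration of this phasing, each phase can be seen as contributing its own protection requirements to a growing set. The phase names below follow the list above; the example requirement texts are invented for illustration and are not from the original material.

from dataclasses import dataclass
from enum import Enum


class AnalysisPhase(Enum):
    PRELIMINARY = "preliminary"    # environment risks, independent of technology
    LIFE_CYCLE = "life-cycle"      # risks arising from design and technology decisions
    OPERATIONAL = "operational"    # risks from operator error at the user interface


@dataclass
class ProtectionRequirement:
    text: str
    origin: AnalysisPhase


# Hypothetical requirements showing how each phase extends the requirements set.
requirements = [
    ProtectionRequirement(
        "The system shall detect and report loss of its primary power supply.",
        AnalysisPhase.PRELIMINARY),
    ProtectionRequirement(
        "All data sent over the chosen wireless link shall be encrypted.",
        AnalysisPhase.LIFE_CYCLE),
    ProtectionRequirement(
        "The operator shall confirm any command that overrides a safety interlock.",
        AnalysisPhase.OPERATIONAL),
]

for req in requirements:
    print(f"[{req.origin.value}] {req.text}")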

 