Part 1: Submit an example RFRM for a homeland security or intelligence topic of your choice (Drug Trafficking). Consider the example risk problem from the previous assignment for which you received feedback. Select 3 or 4 phases from RFRM for your example, tailor those phases to fit your problem, and simplify and focus your problem as needed. Be thoughtful, but keep it simple!!! Consider writing a half-page example for each phase. You may make assumptions and introduce artificialities to make this assignment feasible. (Please document assumptions and artificialities.) You will be asked to review another student's RFRM so that you can receive feedback from them on your example. Feel free to openly collaborate.

Discussion leading up to this is below:

Goal: to be able to tailor and apply each phase of the risk filtering, ranking, and management (RFRM) framework to begin with a fuzzy risk problem and build it into a specific risk analysis.

Risk Filtering, Ranking, and Management (RFRM) is a framework, which means that each phase must be tailored to the goals and objectives of the risk analysis.

You are already familiar with the phases. This section now focuses on the important aspects and features of RFRM that need to be tailored. Tailoring the process improves accuracy and defensibility, and also helps accomplish the goals and objectives of your risk analysis. For each phase, we will examine some tips for tailoring, give you a challenge question to apply the process, and then give you some examples continuing the mass-casualty analysis that was initiated. As we go through the phases in more detail, pay special attention to the process of iteration, in which you dig into details, then rank and filter, and then look for additional details.

As discussed, every risk analysis begins with scenario structuring, trying to answer the question: what can go wrong? This process can fill any time that is allotted, so you must begin by timeboxing, which means establishing the amount of time that you plan to spend developing structured scenarios. Every other phase of RFRM is a set of concepts and philosophies that must be tailored to create a strong risk management process and strategy.

RFRM Phase    Theory & Objective    Tailoring Needs    Application Example
1    Scenario Identification    You need to establish a foundation of “perils” that provides a granular definition of what risk is being studied. HHM does this by identifying perspectives (head-topics) and dimensions (sub-topics) of the system and from there identifying numerous scenarios, e.g., of the form [trigger, event, consequence]. PHA (which includes HAZOP and FMEA) does this by decomposing a process into functional pieces and identifying possible failure modes of those functions. DFDs do this in cybersecurity by decomposing your network into “data resources” that store, process, or transmit data and their connection points; you then use something like STRIDE to produce an attack surface of exploitation scenarios.

Main point of Phase I: we need to break the system down into smaller pieces to understand how failure will happen and the types of damaging outcomes we might observe.   

– timebox (set an amount of time for head-topics, subtopics, then scenarios)

– structural constraining (set a number of head-topics, subtopics, and scenarios)

– iterations (separate rounds of engagement, e.g., agree on perspectives/head-topics, then decompose one subtopic at a time, then generate scenarios, then validate scenario generation)

– threaded (split into separate groups, each group does HHM independently, then combine)
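The HHM decomposition above can be sketched as a simple nested data structure. This is only an illustrative toy for the drug-trafficking example; the head-topics, subtopics, and [trigger, event, consequence] scenarios shown are assumptions, not part of the RFRM framework itself.

```python
# A minimal sketch of an HHM decomposition for a drug-trafficking risk study.
# All head-topics, subtopics, and scenarios here are illustrative assumptions.

hhm = {
    "Border Security": {            # head-topic (perspective)
        "Ports of Entry": [         # subtopic (dimension)
            # scenarios as (trigger, event, consequence) triples
            ("staff shortage", "reduced cargo inspections", "increased smuggling volume"),
            ("new concealment method", "detection technology defeated", "undetected shipments"),
        ],
    },
    "Financial Networks": {
        "Money Laundering": [
            ("regulatory gap", "bulk cash moved through shell firms", "funded trafficking operations"),
        ],
    },
}

def all_scenarios(hhm):
    """Flatten the HHM into a list of (head_topic, subtopic, scenario) tuples."""
    return [
        (head, sub, scen)
        for head, subs in hhm.items()
        for sub, scens in subs.items()
        for scen in scens
    ]

print(len(all_scenarios(hhm)))  # 3 scenarios in this toy decomposition
```

Structural constraining and timeboxing then become simple caps on how many entries each level of this structure is allowed to hold.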

2    Scenario Filtering Based on Scope, Time, and Level of Decision    Filtering based on decision-maker purpose, objectives, etc.
You need to filter out all risk scenarios or system features that are out of scope. This means clearly identifying your decision maker or stakeholders and establishing some constraints on what questions the risk study is trying to answer.

Main Point: This phase is used to scope the problem around a decision and a decision maker.   

– Is it within the control/responsibility of the decision maker?

– Top X most important issues to the decision maker?

– Which scenarios (or sub-topics) are relevant to the time domain of the decision maker?

– Top X least tolerable risks to the decision maker?

– (For example, at sub-topic or scenario level: eliminate the least important/most tolerable issues by rounds; or select the most important/least tolerable issues by rounds)

3    Bi-Criteria Filtering and Ranking    This phase is trying to rapidly reduce the risk scenarios to those that are “more likely” and “more damaging.” If you have already done Phases 1 and 2, then you have a better understanding of the problem. You first need to decide on a rubric that is meaningful for evaluating scenarios. There should be a rubric for likelihood (e.g., a common understanding of what HIGH, MED, and LOW mean, or some key words like “hardly ever,” “sometimes,” “frequently”). There should be a rubric for consequences (e.g., a common understanding of “mission failure,” “operation failure,” “life lost,” “inconvenient,” “negligible,” or whatever key words help a group to have a common understanding of a relative ranking). Finally, you need a way to decide which combinations of likelihood and consequence will result in which levels of risk and in filtering-out. For example, if level of risk is directly connected to a patching decision, you might say that some combinations require you to “patch immediately,” while other combinations lead to “schedule patching,” and still others lead to “can ignore.” Or you might have simple descriptions of HIGH, MED, and LOW risk ratings.

Main Point: this stage is used to rapidly filter out risk scenarios that are both unlikely and inconsequential to focus the problem.

1. Clearly define rubric for likelihood. (For example, consider a cybersecurity example in which you ask questions such as: Can it be done with basic skills? Can it be done without special resources? Can it be done without preconditions (e.g., network access)? Can an adversary easily recognize opportunity?  These questions can be used for a clear categorization of likelihood.)

2. Define a clear rubric for consequence. (For example, minor expenses from a disruption incident, major incident expenses from a disruption, disruption results in interruption of revenue stream with incident expenses, chronic/sustained revenue losses, potential to divest business/product line)

3. Define a clear rubric for risk levels based on combinations of the likelihood and consequence rubrics.
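The three-rubric idea above can be sketched as a small lookup table. The mapping of (likelihood, consequence) pairs to actions, and the example scenarios, are assumptions chosen for illustration; a real study would define them with the decision maker.

```python
# A minimal sketch of Phase 3 bi-criteria filtering: a risk matrix maps each
# (likelihood, consequence) pair to an action, and ignorable scenarios are
# filtered out. All labels and ratings here are illustrative assumptions.

RISK_MATRIX = {
    ("HIGH", "HIGH"): "patch immediately",
    ("HIGH", "MED"):  "patch immediately",
    ("HIGH", "LOW"):  "schedule patching",
    ("MED",  "HIGH"): "patch immediately",
    ("MED",  "MED"):  "schedule patching",
    ("MED",  "LOW"):  "can ignore",
    ("LOW",  "HIGH"): "schedule patching",
    ("LOW",  "MED"):  "can ignore",
    ("LOW",  "LOW"):  "can ignore",
}

def filter_scenarios(scenarios):
    """Keep only scenarios whose rated risk is not in the 'can ignore' cell."""
    return [s for s in scenarios
            if RISK_MATRIX[(s["likelihood"], s["consequence"])] != "can ignore"]

scenarios = [
    {"name": "tunnel smuggling",  "likelihood": "MED", "consequence": "HIGH"},
    {"name": "courier intercept", "likelihood": "LOW", "consequence": "LOW"},
]
kept = filter_scenarios(scenarios)
print([s["name"] for s in kept])  # ['tunnel smuggling']
```

Writing the matrix out explicitly like this is itself a useful exercise: it forces the group to agree on every cell before any scenario is rated.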

4    Multi-Criteria Evaluation    In this concept there is a set of typical system attributes that lead to robustness, resilience, and redundancy. This provides an opportunity to evaluate scenarios against these defeat measures (such as effects are irreversible, event is undetectable, damages will cascade, etc.). This can be tailored and used in multiple ways. For example, you could map these defeat measures to families of controls (e.g., something like a traditional security evaluation), you could rank the scenarios based on which ones represent easier defeat of the system and use that as a surrogate for likelihood, or you could select a set of scenarios that represent fundamentally different defeat paths.

Main point: This phase helps to understand the mechanisms of failure for setting up modeling, identifying management activities, or defensibly focusing on a narrower set of risk scenarios.   

– score defeat paths as a multi-criteria scoring system (e.g., 5 points for HIGHs, 3 points for MEDs, 1 point for LOWs), rate scenarios, and keep those above a threshold

– rate scenarios and ensure that you have a mixture of scenarios whose defeat paths score high in each category so that you get representative scenarios

– rate scenarios and use an orthogonality metric (e.g., Gram-Schmidt) for a selection algorithm (e.g., start with most HIGHs, then select the most orthogonal that has the next most HIGHs, continue until orthogonality of remaining scenarios is too small)

– evaluate which defeat paths are most common to focus risk management (Phase VI) concepts

– establish the defeat patterns for quantitative risk modeling (Phase V)
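The first bullet above (5/3/1 points with a keep-threshold) can be sketched in a few lines. The defeat criteria, ratings, and threshold value are illustrative assumptions.

```python
# A sketch of Phase 4 multi-criteria scoring: 5 points per HIGH, 3 per MED,
# 1 per LOW across the defeat criteria, keeping scenarios above a threshold.
# The criteria, ratings, and threshold are illustrative assumptions.

POINTS = {"HIGH": 5, "MED": 3, "LOW": 1}

def defeat_score(ratings):
    """Sum points over defeat criteria (e.g., undetectable, irreversible, cascading)."""
    return sum(POINTS[r] for r in ratings.values())

scenarios = {
    "tunnel smuggling":  {"undetectable": "HIGH", "irreversible": "MED", "cascading": "LOW"},
    "courier intercept": {"undetectable": "LOW",  "irreversible": "LOW", "cascading": "LOW"},
}

THRESHOLD = 7
kept = [name for name, r in scenarios.items() if defeat_score(r) >= THRESHOLD]
print(kept)  # ['tunnel smuggling']  (score 9 vs. 3)
```

The same score vectors could feed the orthogonality-based selection bullet: treat each scenario's per-criterion points as a vector and greedily pick the most mutually orthogonal high scorers.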
5    Quantitative Ranking    You begin to introduce objective evidence, modeling, simulation, and calculation. If you use experts, you need to calibrate them. You will need specific damage measures that are applicable to your problem (e.g., hours lost, lives lost, data lost, people impacted). This is where you force a shift to verifiable concepts, such as probability and frequency; even if you are not correct, this provides a common scale for comparison and forces conversations about justifiability and defensibility. You might also introduce measures of extreme events (for example, conditional expected losses or worst case).

Main point: this phase focuses on driving evidence-based, objectively verifiable computation of risk in terms of damage and frequency.   

– make the rubric used in Phase 3 have quantifiable boundaries

– elicit ranges of probabilities (or frequencies) and consequences and plot them as whisker plots on a likelihood-consequence scale

– elicit distributions, Monte Carlo simulate the damage measure, and plot the result as a “probability of exceedance” curve
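The Monte Carlo bullet above can be sketched with the standard library alone: elicit a distribution for the damage measure, sample it, and tabulate the exceedance curve. The lognormal parameters, units, and thresholds are illustrative assumptions, not elicited values.

```python
# A minimal sketch of the Phase 5 Monte Carlo idea: sample an elicited loss
# distribution and compute a probability-of-exceedance curve. The lognormal
# parameters and thresholds below are illustrative assumptions.

import random

random.seed(42)  # fixed seed so the sketch is reproducible

def simulate_losses(n=10_000, mu=2.0, sigma=1.0):
    """Draw n loss samples (e.g., millions of dollars) from a lognormal distribution."""
    return [random.lognormvariate(mu, sigma) for _ in range(n)]

def exceedance(samples, thresholds):
    """P(loss > t) for each threshold t: one point per t on the exceedance curve."""
    n = len(samples)
    return {t: sum(x > t for x in samples) / n for t in thresholds}

losses = simulate_losses()
curve = exceedance(losses, thresholds=[1, 5, 10, 50])
for t, p in curve.items():
    print(f"P(loss > {t}) = {p:.3f}")
```

Plotting these points against the thresholds gives the exceedance curve; the tail values (here, P(loss > 50)) are where measures of extreme events such as conditional expected loss come into play.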

6    Risk Management    This is where you select alternative controls/mitigations/remediations and evaluate the tradeoffs between their application. You might use decision trees that enable you to fold probabilistic outcomes onto decisions. You might use simulation and generate Pareto-optimal curves. This will most likely be a phase in which you re-engage with the decision maker to understand their tolerance (how much uncertainty of damage is acceptable or tolerable) and their values (how much of one type of damage, e.g., budget, they are willing to give up for another type of damage, e.g., number of exploits per year in the system).

Main point: this phase is about understanding the tradeoffs between alternative risk mitigations to communicate the key aspects of the problem to the decision maker.   

– do a “with and without” analysis (essentially redoing Phase 3, 4, or 5 “with” the risk management technique)

– decision tree analysis

– fault tree analysis with scenarios

– compare exceedance distributions
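The decision-tree and "with and without" bullets can be sketched together with a tiny two-branch tree: each alternative has an upfront cost plus probabilistic loss outcomes, and we compare expected totals. All probabilities, costs, and names are illustrative assumptions.

```python
# A sketch of folding probabilistic outcomes onto a decision (Phase 6):
# compare expected total loss with and without a mitigation. Probabilities,
# costs, and alternative names are illustrative assumptions.

def expected_loss(outcomes):
    """Expected loss of a branch: sum of probability * consequence."""
    return sum(p * loss for p, loss in outcomes)

# Each alternative: (upfront cost, list of (probability, loss) outcomes).
alternatives = {
    "do nothing":       (0.0,  [(0.25, 100.0), (0.75, 0.0)]),
    "add interdiction": (10.0, [(0.125, 100.0), (0.875, 0.0)]),
}

totals = {name: cost + expected_loss(outs)
          for name, (cost, outs) in alternatives.items()}
best = min(totals, key=totals.get)
print(totals)  # {'do nothing': 25.0, 'add interdiction': 22.5}
print(best)    # add interdiction
```

In practice the loss outcomes would come from the Phase 5 distributions rather than two-point stubs, and the comparison would be presented alongside the decision maker's tolerance for the worst-case branch, not just the expectation.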

7    Safeguarding Against Missing Critical Items    Once you select and implement risk management, you have changed your system. There will be secondary and tertiary consequences. You need to run back through your key assumptions and conclusions to understand their sensitivity to things that you ignored, threw out, or will change.

Main point: managing risk changes the system, and you need to iterate over what you know to ensure that you haven’t discarded something that is potentially important.

– What’s changed? (backtrack to each of the scenarios that were filtered out to identify what has changed and see if any escalate across the filter)

– What’s new? (use the risk management action as a head-topic, develop subtopics and risk scenarios from it, and repeat your tailored process)

– What did we miss? (Evaluate filtered items to look for the next best scenario and evaluate whether it changes risk management recommendations)

– Sensitivity (Evaluate how sensitive the risk management recommendation is to various rubrics and assumptions)

8    Operational Feedback    
As you implement management, you have an opportunity to establish mechanisms for information gathering, sensing, evaluating, etc. In some problems, it becomes critical to continuously monitor the solution for effectiveness. In other problems, you want to establish a learning process for risks that were downgraded for lack of evidence, monitoring for evidence that might prompt an update or a shift in strategy.

Main point: this last phase/concept is to establish a rhythm for monitoring and gathering data about the things that are uncertain. This is separated from risk management because it is something that should be considered in every risk analysis (since every risk analysis is time-bound).

– Detectability analysis (review Phase IV for those items that are undetectable and evaluate what sensors would improve detectability)

– Cassandra risks (identify risks with long time horizons and develop pre-cursors that would provide early warning in case the risks matter)

– Pythia risks (identify risks with large uncertainty due to abstractions and develop ways to prompt future analyses as further investigation becomes possible)

– Lean Monitoring (identify key sensitivities, assumptions, etc. and design tests and experiments that will help validate assumptions, reduce uncertainties, etc.)
