COMPG007:	Operational	Risk	Measurement	for	Financial	Institutions	
Coursework	
	
	
Scenario	Models	and	Sensitivity	Analysis	
in	Operational	Risk	
	
	
Lecturer:	Dr	Ariane	Chapelle	
	
Team Members: Ruixin Bao, Yang Li, Hanlin Yue
	
2016.12
	
Content

1. Introduction
1.1 Research Objective
1.2 Literature Review
1.3 Research Procedure
2. Scenarios Generation
2.1 Scenario I – Asset Misappropriation
2.2 Scenario II – Data Loss by Cyber Attack
2.3 Aggregated Scenario
3. Sensitivity Analysis
3.1 Sensitivity Analysis for Scenario I
3.2 Sensitivity Analysis for Scenario II
3.3 Sensitivity Analysis for Aggregated Scenario
4. Alternative Adjustment on Loss Measure Quantile
4.1 Introduction to Cluster Analysis
4.2 Application on Adjustment of Scenario Result
4.3 Important Meaning to Loss Measure Quantile
5. Conclusion
5.1 Discussion of Strategic Options
5.2 Limitation and Improvement
6. References
7. Appendix
1.Introduction	
The purpose of this paper is to create, analyse and generate reliable scenario data for operational risk (OR) events in a bank, and to provide efficient strategies for improving operational risk management in order to help prevent future losses. Because of the scarcity of essential data on 'high severity, low frequency' events when aggregating the bank's losses, the scenario approach is the most appropriate method for filling the gaps in the total loss distribution, especially in the tail. Effective scenario modelling can help financial institutions understand how a particular operational risk event happened, what caused it, and what its possible impacts are. Scenario sensitivity analysis can also help decision makers find the key factors behind a loss and inspire them to design the most efficient controls to protect their institutions from future losses.
In this paper, we focus on modelling and sensitivity testing of two cases, asset misappropriation and cyber-attack, since these two events make a large contribution to a bank's loss distribution. Both have high-severity, low-frequency characteristics, which makes them obvious targets for scenario analysis. Moreover, sensitivity analysis for the two scenarios and for the combined scenario is used to explore the most sensitive and essential risk drivers. Next, a cluster method is applied to adjust the quantiles by grouping the data into subsets according to the severity of the OR losses. Based on the results obtained, strategic options can be provided to managers for the future operational risk management of these two OR events.
1.1	Research	Objective	
As far as we know, there is still no standard method for scenario generation and aggregation, because of the differences between OR events and business environments. It is therefore worthwhile to explore a more efficient process and methodology in this work, aiming to support decision makers by showing the sensitive factors in the scenario cases and by estimating a sufficient and appropriate capital requirement to protect the bank from future risks. This research applies academic concepts and methodologies of operational risk management and assessment, especially the scenario approach, to a realistic case in a bank. The results can be used directly by banks as models for analysing their operational losses from asset misappropriation and cyber-attack. Based on the scenario approach and the cluster method, an appropriate capital requirement can be calculated for operational losses in the following years; of course, additional conditions should be considered every year to reflect changes in the external financial environment and the internal business structure. We believe that this research is applicable in the current global financial circumstances and that it contributes to the robustness of scenario modelling through a solid consideration of the details of the events and of the target organisation's structure.
1.2	Literature	Review	
Academics and practitioners have proposed various multiple-scenario analyses to treat uncertainties about the future of business organisations since the 1970s [14]. Since the external local and global environment is laden with uncertain changes, it is difficult to detect potential trends; scenario analysis is therefore valuable because it advocates generating alternative pictures of the external environment's future [2]. There is no doubt that scenario analysis is increasingly attractive to managers [3][4], and various methodologies for generating scenarios can be found in the literature [4-10].
For instance, Ringland [10] reports that the majority of companies she surveyed apply the approach known as Pierre Wack Intuitive Logics, created by the former Shell group planner Pierre Wack. This approach focuses on constructing a comprehensible and credible set of situations about the future, used as a 'wind tunnel' to test business plans or projects by encouraging public debate and improving coherence. Over the past few decades, the thinking Shell used to deal with scenarios has spread to other organisations and institutions such as SRI and GBN [10]. Later, this Shell approach and Godet's approach were compared by Barbieri Masini and Medina Vasquez [13].
Ringland [10] also describes other organisations and their methods for constructing scenarios, including 'Battelle Institute (BASICS), the Copenhagen Institute for Future Studies (the futures game), the European Commission (the Shaping Factors–Shaping Actors), the French School (Godet approach: MICMAC), the Futures Group (the Fundamental Planning Method), Global Business Network (scenario development using Peter Schwartz's methodology), Northeast Consulting Resources (the Future Mapping Method) and Stanford Research Institute (Scenario-Based Strategy Development)'. In this paper, the scenario process is adapted to the bank's structure, the target events and, above all, these previous scenario approaches and experiences.
1.3	Research	Procedure	
The research process follows the basic scenario process, with the following steps [2][11][12]:
Step 1: Identify focal issues for our bank
Step 2: Identify the main forces in the local circumstances and in the internal and external business environment
Step 3: Derive the key risk drivers and forces
Step 4: Rank the factors by uncertainty and importance
Step 5: Draw the scenario flowcharts in a reasonable and logical way
Step 6: Materialise the scenarios and aggregate them
Step 7: Sensitivity analysis
Step 8: Cluster method to adjust the loss measure quantiles
Step 9: Implications for strategy
Step 10: Discuss the strategic options
Step 11: Settle the implementation plan
The objective is to observe and analyse the sensitivities of the scenario cases based on suitable assumptions summarised from empirical evidence. The Swiss Cheese Model can be used to build the scenario model after identifying each event's exposures, occurrences and impacts. Through the Monte Carlo method, the loss distributions over a year can be generated, and the combined scenario loss distribution can be obtained through an aggregation technique and used as a benchmark for the capital requirement.
In this paper, two individual scenario distributions and one combined scenario distribution are generated for the OR events asset misappropriation and data loss from cyber-attack. After inputting the necessary parameters, based on the bank's information and experts' opinions, Monte Carlo simulation is used to generate the VaR for each scenario. Next, the VaR quantiles can be corrected by the cluster methodology to produce quantiles better suited to the severity of the OR losses. Decision makers can use the results of this research as reliable and essential suggestions for operational risk management in their bank.
	
2.Scenarios	Generation	
2.1	Scenario	I	–	Asset	Misappropriation	
2.1.1	Asset	Misappropriation	definition	
Asset misappropriation fraud is the loss of assets that occurs when people who are entrusted to manage an organisation's assets steal from it. This fraudulent behaviour usually happens when third parties or employees of an organisation abuse their position to obtain access and steal cash, cash equivalents, company data or intellectual property, all of which are vital to the organisation's business. This type of operational risk should therefore be modelled and analysed carefully, especially because real data are extremely scarce owing to the privacy of the issue, the stigma for the organisation and the negative impact on its public image. This type of internal fraud can be attributed to company directors, employees, or anyone else entrusted to hold and manage the assets and interests of the organisation. Modelling, analysing and discovering the most efficient scenario methodology is the main purpose of this paper, in order to gain a deeper understanding of this kind of fraud and to provide realistic measures to avoid, stop and remedy such issues.
2.1.2	Scenario	Explanation	and	Assumptions	
Normally, asset misappropriation fraud includes fraudulent behaviour such as:
i. Embezzlement, where accounts have been falsified or fake invoices have been created.
ii. Deception by employees inside the bank, such as false expense statements.
iii. Payment fraud, where payrolls have been fictitious or diverted, or non-existent clients or employees have been created.
iv. Data theft.
v. Intellectual property theft.
In this scenario, the target is asset misappropriation within a medium-sized bank branch. Based on the bank's basic information and structure, some reasonable assumptions can be proposed at this stage, as follows.
	
• The asset types in this bank most likely to be stolen are credit notes, vouchers, company data and intellectual property.
• The bank has 2000 employees, simplified into four position types ranked by the value of the access they hold in the bank: head of the bank and vice-presidents (20), managers and directors (180), senior analysts (600) and junior analysts (1200).
• On average, the probability that an employee attempts internal fraud is 5%. Depending on the quality of processes, internal systems and controls, this probability can move up or down, and it differs slightly by level: head of the bank and vice-presidents 10%, managers and directors 10%, senior analysts 5%, junior analysts 5%.
• The amount of asset that can be stolen differs by position and is modelled as a random draw from a normal distribution with a level-specific mean and variance: head of the bank and vice-presidents around 1000 units (variance 300), managers and directors about 100 units (variance 30), senior associates about 20 units (variance 6), and junior analysts about 10 units (variance 3).
• If employees want to misappropriate the bank's assets under their own authority, they can directly access a certain volume: head of the bank and vice-presidents (level 4) can access 100% of the asset amount, managers and directors (level 3) 90%, senior analysts (level 2) 75%, and junior analysts (level 1) 50%, according to the number of entrances they hold in the bank.
• If an employee wants to embezzle bank assets beyond this, the employee needs permission from his or her superiors to complete the fraud. According to experts within the bank, the probabilities that superiors are successfully deceived through fake documents are 50% for a junior analyst obtaining a permit from their manager, 25% for managers and directors, and 10% for the head and vice-presidents.
• Depending on the level of the employee involved, the severity of the event is scaled by a multiplier: head of the bank and vice-presidents ×1.728, managers and directors ×1.44, senior analysts ×1.2, and junior analysts ×1.
	
Once this happens, the bank should react immediately and report the case to Action Fraud, because if fraudsters are not tackled, opportunistic one-off frauds can become systemic and spread within the bank, and fraudsters may come to think their behaviour is acceptable, creating a negative company culture of theft and fraud.
2.1.3	Asset	Misappropriation	Flowchart
In this scenario, the assets most likely to be lost at our bank through asset misappropriation can be divided into four types: credit notes, vouchers, bank data and intellectual property. All asset misappropriation can be attributed to two isolated causes: expense fiddling, or an employee lying about his or her qualifications to get a job. In this case, the different employee positions are treated as different occurrences, which makes it straightforward to calculate the total loss from their level of access and the value of the assets they can obtain. Finally, the impact is used to calculate the total loss with the following formula, where the reputational loss is captured through the severity of the event:

$$\text{Loss} = V_{loss} \times V_{amount} \times \text{Severity}$$
	
After analysing the exposure, occurrence and impact of asset misappropriation, we can use the Swiss Cheese Model (cumulative act effect) to apply preventative (P), detective (D) and corrective (C) controls that reduce the probability of the event, limit its effect and mitigate its consequences.
Here, the different controls can be initialised with quantitative values according to experts' suggestions and historical data, as follows:
• P1: Vetting employees by CV and references reduces the initial criminal probability.
• P2: Implement a whistleblowing policy.
• P3: Impose clear segregation of duties.
• P4: Control access to buildings and systems.
• D1: Checking invoices and related documents.
• D2: Internal audit, which detects the event with probability 98%.
• C1: Insurance and backup; the insured proportions differ by level of employee: head of the bank and vice-presidents 0%, managers and directors 70%, senior analysts 50%, and junior analysts 0%.
• C2: Tackling the relevant employees reduces the severity of the issue.
	
[Figure: Scenario I flowchart – Exposure (credit notes, vouchers, bank data, intellectual property), Occurrence (head and vice-presidents, managers and directors, senior associates, junior analysts) and Impact (value of loss, amount of loss, reputation loss).]

[Figure: Swiss Cheese control diagram for Scenario I – preventative controls P1–P4, detective controls D1–D2 and corrective controls C1–C2 applied around the asset misappropriation event.]
	
	
2.1.4	Result	
We apply Monte Carlo simulation to this scenario in order to obtain reliable data for the analysis. To ensure the accuracy of the result, the process is repeated 10,000 times, which gives more stable and realistic results than 2,000 or 5,000 repetitions.
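As an illustration, the following MATLAB sketch shows one possible, deliberately simplified implementation of this simulation loop. The parameter values are taken from the assumptions in Section 2.1.2, the variable names are ours, the senior-analyst permission probability is not stated in the text and is assumed here, and the controls are applied only in a reduced form (P2, D2 and C1), so this is a sketch of the mechanics rather than the exact model behind the reported figures.

```matlab
% Simplified annual-loss simulation for Scenario I (asset misappropriation).
% Parameter values follow the assumptions in Section 2.1.2; variable names are ours.
rng(1);                                       % for reproducibility
nSim        = 10000;                          % Monte Carlo repetitions
nStaff      = [1200 600 180 20];              % junior, senior, directors, head/VP
pCriminal   = [0.05 0.05 0.10 0.10];          % probability of a potential fraudster
meanAsset   = [10 20 100 1000];               % mean asset units accessible per level
spreadAsset = [3 6 30 300];                   % spread per level (used here as std. dev.)
access      = [0.50 0.75 0.90 1.00];          % share of the asset directly accessible
pPermit     = [0.50 0.25 0.25 0.10];          % probability the superior is deceived
                                              % (senior-analyst value assumed)
severity    = [1 1.2 1.44 1.728];             % severity (reputation) multiplier
insured     = [0 0.50 0.70 0];                % C1: insured proportion per level
pDisclose   = 0.5;                            % P2: chance a colleague blows the whistle
auditKeep   = 0.98;                           % D2: share of the loss left after audit

losses = zeros(nSim, 1);
for s = 1:nSim
    yearLoss = 0;
    for lvl = 1:4
        nFraud = binornd(nStaff(lvl), pCriminal(lvl));      % potential fraudsters
        for i = 1:nFraud
            amount = max(normrnd(meanAsset(lvl), spreadAsset(lvl)), 0) * access(lvl);
            if rand < pPermit(lvl) && rand > pDisclose       % deceived and not disclosed
                loss = amount * severity(lvl) * auditKeep * (1 - insured(lvl));
                yearLoss = yearLoss + loss;
            end
        end
    end
    losses(s) = yearLoss;
end
VaR = prctile(losses, [25 50 75 95 99 99.9]);                % VaR quantiles ($)
```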
Inputting all the parameters and running the above procedure gives the following VaR results ($):

25% VaR 50% VaR 75% VaR 95% VaR 99% VaR 99.9% VaR
13783.10 22268.45 41949.64 118382.76 210907.22 302527.28

Plot 1: Simulation Result of Scenario I – Asset Misappropriation
	
By trying different distribution types on the simulated data, we find that the Generalized Extreme Value (GEV) distribution fits the data very well, which makes sense since asset misappropriation can be treated as an extreme event: under Extreme Value Theory (EVT), the GEV distribution is a natural way to model tail losses, especially in a scenario setting. The simulation result also shows that the overall loss distribution is roughly lognormal in shape, which is plausible in reality, so we treat the result as acceptable.
Estimated values for the GEV distribution's parameters, mean and variance are as follows:
Log	likelihood	 Mean	 Variance	 k	 sigma	 mu	
-112685	 44682.1	 Inf	 0.657664	 11172.5	 17407.7	
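For illustration, estimates of this kind can be obtained in MATLAB roughly as follows, assuming the simulated annual losses are stored in a vector losses (as in the sketch above) and that the Statistics and Machine Learning Toolbox is available.

```matlab
% Fit a Generalized Extreme Value distribution to the simulated losses
% and read off the VaR quantiles.
paramEsts      = gevfit(losses);               % [k sigma mu]
[gevM, gevVar] = gevstat(paramEsts(1), paramEsts(2), paramEsts(3));
nll            = gevlike(paramEsts, losses);   % negative log-likelihood
VaR            = prctile(losses, [25 50 75 95 99 99.9]);
fprintf('k=%.4f sigma=%.1f mu=%.1f mean=%.1f var=%.3g\n', ...
        paramEsts(1), paramEsts(2), paramEsts(3), gevM, gevVar);
```

Note that for a shape parameter k ≥ 0.5 the GEV variance is infinite, which is why the variance in the table above is reported as Inf.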
	
From the figure above, one important characteristic of asset misappropriation is that once it happens it causes a large loss for the bank. Although trust between the bank and its employees is essential, strategies ought to be adopted to stop this kind of issue at the very beginning and make sure it cannot have a huge impact on the bank. GEV fitting is the most appropriate fitting method in this case, and the overall shape of the distribution can also be viewed as approximately lognormal, which makes sense in real life.
2.2	Scenario	II	–	Data	loss	by	Cyber	Attack	
2.2.1	Significance	of	exploring	data	loss	by	cyber	attack	
Cyber-attacks are advanced persistent threats that target company secrets; they can cost companies a huge amount of money and could even put them out of business. Therefore, it is valuable to model and analyse the loss caused by cyber-attacks. Normally, hackers infiltrate an institution's system with one of two aims: cyber espionage or data sabotage. In this scenario, data sabotage is highlighted, especially data loss caused by hackers infiltrating the bank. The emphasis of this scenario is to simulate how hackers get into the bank's network system and destroy essential data, and what detections the bank could apply to protect its data and minimise losses.
2.2.2	scenario	analysis	flowchart	
Assumptions:	
• The	total	volume	of	data	at	this	bank	is	10000	units	
• There	are	three	firewalls	at	this	bank	with	different	security	levels,	data	allocations,	and	data	
significance.	
• There are only two types of data: clients' information (50%) and management information (50%). Usually the bank has a backup of all clients' information, but some clients' records may be missing from the backup because of omissions when filling the backup storage or negligence of the related staff. The majority of management information may not be copied to the backup.
• Network engineers check the whole system once an hour; the frequency of checking can be read as a measure of the engineers' ability, in the sense that the more frequently they check, the stronger their capability. Here it is assumed that hackers are almost surely discovered if they are still infiltrating when the engineers check the system.
[Figure: Scenario II flowchart – Exposure (clients' information, management information); preventative controls PC.1–PC.3 (firewalls 1–3 with pass probabilities 50%/25%/5% and data volumes 5%/10%/85%); detective controls D.C.1 (engineers) and D.C.2 (backup); Impact (value of data, volume of data, time to detect).]
2.2.3	Scenario	process	
Based on the assumptions of this scenario, the Monte Carlo technique is applied to simulate cyber-attacks during a year and generate data in order to compute VaR (Value at Risk) and find the distribution of the loss. To ensure the accuracy of the model, the Monte Carlo simulation was repeated 10,000 times.
We start with a hacker who tries to infiltrate the bank's system and needs to pass three firewalls with different security levels, data values and data allocations, as follows.
a. The hacker needs 5 minutes to break the first firewall and can then reach 5% of the data, valued at 10 dollars per unit; each hacker passes the first firewall with probability 50%.
b. The hacker needs 15 minutes to break the second firewall and can then reach 10% of the data, valued at 20 dollars per unit; each hacker passes the second firewall with probability 25%.
c. The hacker needs 45 minutes to break the third firewall and can then reach 85% of the data, valued at 50 dollars per unit; each hacker passes the third firewall with probability 5%.
After passing the firewalls, a hacker can download or destroy 5% of the reachable data per minute. Once the engineers check the system, the hacker stops destroying data immediately; however, the data already destroyed cannot be recovered immediately, which causes a direct loss to the bank. Hence, the loss is calculated by multiplying the time to detection (Time), the data value ($Va_{data}$) and the data volume ($Vol_{data}$):

$$\text{Loss} = \text{Time} \times Va_{data} \times Vol_{data}$$
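A minimal MATLAB sketch of this process is given below. It simulates a single attack per run, draws the time to the engineers' next check uniformly within the checking interval, and applies one overall backup share; the variable names are ours and the parameter values follow the assumptions above, so it illustrates the mechanics rather than reproducing the exact reported numbers.

```matlab
% Simplified simulation of data loss from a cyber attack (Scenario II).
rng(2);
nSim        = 10000;
totalData   = 10000;                      % total data volume (units)
passProb    = [0.50 0.25 0.05];           % probability of passing firewalls 1-3
breakTime   = [5 15 45];                  % minutes needed to break each firewall
dataShare   = [0.05 0.10 0.85];           % share of the data behind each firewall
dataValue   = [10 20 50];                 % value per unit ($) behind each firewall
checkEvery  = 60;                         % engineers check the system every 60 min
destroyRate = 0.05;                       % share of reachable data destroyed per minute
backupShare = 0.80;                       % share of the lost data recoverable from backup

losses = zeros(nSim, 1);
for s = 1:nSim
    tNextCheck = rand * checkEvery;       % time until the engineers' next check (min)
    tUsed = 0; loss = 0;
    for fw = 1:3
        if rand > passProb(fw), break; end             % attack stopped at this firewall
        tUsed = tUsed + breakTime(fw);                 % time spent breaking in
        tLeft = max(tNextCheck - tUsed, 0);            % time left before detection
        destroyed = min(destroyRate * tLeft, 1) * dataShare(fw) * totalData;
        loss = loss + destroyed * dataValue(fw) * (1 - backupShare);
    end
    losses(s) = loss;
end
VaR = prctile(losses, [25 50 75 95 99 99.9]);          % VaR quantiles ($)
```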
2.2.4	Result	 	
By running the Monte Carlo method in MATLAB, VaR values are computed for different quantiles; this provides scenario data that can later be combined with internal and external loss data for the bank's different business lines, so that the bank-wide operational loss can be calculated. The resulting VaR values ($) are:

25% VaR 50% VaR 75% VaR 95% VaR 99% VaR 99.9% VaR
26932.00 31143.42 36216.00 48334.67 59349.45 76068.35
	 	 	
	
	
Plot	2:	Simulation	Result	of	Scenario	II	–	data	loss	by	cyber	attack	
	
After trying the Lognormal, Generalized Lognormal and Generalized Extreme Value (GEV) distributions on our data, the GEV again performs well in this cyber-attack scenario, and the parameters below show the GEV fit. From the simulation result we can also see that the overall loss distribution is roughly lognormal in shape, which is plausible in reality, so we treat the result as acceptable.
The fitted GEV parameters are:
Log likelihood Mean Variance k sigma mu
-103520 32427.5 6.81508e+07 -0.0122104 6538.51 28731.5
2.3	Aggregated	Scenario	
2.3.1	Meaning	of	Combination	of	Two	Scenarios	 	
To apply our scenario data with the aim of incorporating it into capital, aggregating the losses of the different scenarios is the key step in obtaining the bank's total operational losses. In general, all 80 (10 event types × 8 business lines) operational risk categories would be measured. The first step is to consider different combinations of the various scenarios, using a dependency graph or a scenario correlation matrix. In this paper, the aggregation of the two scenarios is carried out with the variance-covariance matrix method, since asset misappropriation and cyber-attack are the key operational risk events. The objective is to explore the relationship between the total loss distribution and the two individual loss distributions by applying the scenario aggregation methodology. By focusing on the key risk exposures and assessing the dependencies between scenarios, the regulatory capital for both events can be calculated to meet the requirement of protecting the bank from operational risk losses.
2.3.2	Dependency	analysis	 	
The point of interaction between the two scenarios is the same object: bank data. Bank data lost through a cyber-attack may be caused by both external and internal fraudsters; for instance, some internal employees may sell access to essential data to external fraudsters in order to steal company assets. As for specifically interacting terms, two pairs are found to be highly dependent: the probability of a potential criminal in Scenario 1 with the checking frequency in Scenario 2, and insurance and backup in Scenario 1 with backup in Scenario 2. The other elements of both scenarios can be treated as independent, since the correlations between them can be ignored because of their low or non-existent dependence.
For the aggregated scenario, the connection between the individual scenarios lies in these correlated parameters. From the parameters discussed above, the correlated pairs are the following.
	 Scenario	1	 Scenario	2	 Correlation	
A	 Probability	of	Potential	“Criminal”	in	P1	 Checking	Frequency	 High	
B   Insurance and backup proportion in C1   Backup Proportion   Medium
For pair A, the probability of a potential criminal reflects the overall quality of the employees, while the checking frequency reflects the technology level of the engineers; both reflect the quality of the institution's staff.
For pair B, the proportion of insurance and backup in Scenario 1 includes the backup of data. Data is also an important asset that needs to be protected, so the backup of data is included in both scenarios: once the data in Scenario 2 is recovered, the corresponding part of C1 should also be recovered (or insured).
2.3.3	Aggregation	Method	
From the above analysis, the two scenarios can be aggregated with a correlation matrix, since they share some main factors that are correlated with each other. However, of the several parameters used in the two scenarios, only a few are correlated, so the correlation is not very strong. Here the correlation parameter between the two scenarios is simply set to 0.3.
By the variance-covariance matrix method, the following expression is used to calculate the aggregated loss:

$$X^{T} \cdot \Sigma \cdot X$$

where $X$ is the vector of losses and $\Sigma$ is the correlation matrix. Adjusting this for the two-scenario situation, the formula takes the form:

$$L_{total} = \left[ \begin{pmatrix} S_1 & S_2 \end{pmatrix} \begin{pmatrix} \rho_{11} & \rho_{12} \\ \rho_{21} & \rho_{22} \end{pmatrix} \begin{pmatrix} S_1 \\ S_2 \end{pmatrix} \right]^{1/2}$$
This formula is given in the Milliman Research Report 'Aggregation of Risks and Allocation of Capital' [15], where $S_1$ and $S_2$ are the losses from Scenario 1 and Scenario 2 respectively, $\rho_{12} = \rho_{21} = 0.3$ (taken from experts' opinions or historical loss distributions), and $\rho_{11} = \rho_{22} = 1$ because every random variable is perfectly correlated with itself. Expanding the matrix product, the two-scenario case reduces to $L_{total} = \sqrt{S_1^{2} + S_2^{2} + 2\rho_{12} S_1 S_2}$.
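A short MATLAB sketch of this aggregation step, pairing the two simulated loss vectors run by run (here called loss1 and loss2, e.g. as produced by the earlier sketches), might look as follows.

```matlab
% Var-cov aggregation of the two simulated scenario losses (per simulation run).
rho   = 0.3;                               % assumed correlation between scenarios
Sigma = [1 rho; rho 1];                    % correlation matrix
nSim  = numel(loss1);                      % loss1, loss2: vectors of simulated losses
totalLoss = zeros(nSim, 1);
for s = 1:nSim
    S = [loss1(s); loss2(s)];
    totalLoss(s) = sqrt(S' * Sigma * S);   % L_total = (S' * Sigma * S)^(1/2)
end
VaR = prctile(totalLoss, [25 50 75 95 99 99.9]);
```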
2.3.4	Results	
Applying the Monte Carlo methodology to the aggregated scenario above, the VaR can be generated after running the simulation 10,000 times. The algorithm is similar to Scenario 1, and the GEV distribution again fits the data well, since the result is still a combination of extreme-event losses. The resulting VaR values ($) are:

25% VaR 50% VaR 75% VaR 95% VaR 99% VaR 99.9% VaR
33734.71 43380.94 63110.38 140655.30 235615.27 333344.57
	
	
	
Plot	3:	Simulation	Result	of	Combined	Scenarios	
	
Also,	GEV	performs	well	in	this	scenario.	Parameters,	mean,	and	variance	for	GEV	distribution	
are	estimated	as	follows:	
Log	likelihood	 Mean	 Variance	 k	 sigma	 mu	
-114376	 57520.1	 4.59793e+09	 0.423088	 14972.3	 38246.2	
Our finding is the following: comparing the three histogram plots, the distribution of the aggregated scenario is essentially the distribution of Scenario 1 shifted slightly to the right under the influence of the distribution of Scenario 2.
	
3.Sensitivity	Analysis	
The key controls and parameters can be changed to observe the impact on VaR. The importance of these control methods and parameters can then be prioritised according to the resulting VaR, which helps managers keep the risks of the relevant scenarios under control. In order to give a good view of the real loss situation, we recompute the 25%, 50%, 75%, 95%, 99% and 99.9% VaR for comparison and focus mainly on the 50% VaR and the 99.9% VaR, which helps decision makers understand the expected and unexpected loss levels. In each table, the grey row shows the original parameter setting.
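In practice, this kind of sensitivity analysis is simply a loop that re-runs the simulation with one parameter changed at a time and records the resulting quantiles. A minimal MATLAB sketch is shown below; it assumes the Scenario I simulation has been wrapped in a hypothetical function simulateScenario1(params) that returns the vector of simulated annual losses, with baseParams holding the baseline assumptions.

```matlab
% Re-run the simulation for several values of one risk driver and tabulate VaR.
% simulateScenario1 and baseParams are hypothetical wrappers around the
% Scenario I simulation and its baseline assumptions.
quantiles = [25 50 75 95 99 99.9];
pValues   = [0.025 0.05 0.10 0.15];           % candidate criminal probabilities
VaRtable  = zeros(numel(pValues), numel(quantiles));
for i = 1:numel(pValues)
    params           = baseParams;            % struct with the baseline assumptions
    params.pCriminal = pValues(i) * ones(1, 4);
    losses           = simulateScenario1(params);
    VaRtable(i, :)   = prctile(losses, quantiles);
end
disp(array2table(VaRtable, 'VariableNames', ...
     {'VaR25','VaR50','VaR75','VaR95','VaR99','VaR99_9'}));
```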
3.1	Sensitivity	analysis	for	Scenario	I	
3.1.1	P1	-	Vet	employees	by	CV	and	references	
'Vet employees by CV and references' is a control applied during the recruitment process and employee training. Here we set a probability representing the chance that an individual employee is willing to commit such 'criminal' behaviour; combined with the overall staff numbers, the number of potential 'criminals' follows a binomial distribution. Through strict recruitment and career training, the probability of potential theft can be decreased. Here we adjust this value and obtain the following table.
Probability of potential "criminal" per level and resulting VaR:

| Analyst | Associate | Directors | Vice-presidents | 25% VaR | 50% VaR | 75% VaR | 95% VaR | 99% VaR | 99.9% VaR |
| 0.05 | 0.05 | 0.025 | 0.025 | 5331.20 | 9679.31 | 22463.20 | 74212.06 | 187905.30 | 251974.99 |
| 0.1 | 0.1 | 0.05 | 0.05 | 13783.10 | 22268.45 | 41949.64 | 118382.76 | 210907.22 | 302527.28 |
| 0.2 | 0.2 | 0.1 | 0.1 | 31358.74 | 47146.27 | 75410.93 | 182585.70 | 268775.28 | 432655.92 |
| 0.3 | 0.3 | 0.15 | 0.15 | 49975.73 | 73195.94 | 108119.85 | 227406.16 | 317244.38 | 432344.15 |
| 0.1 | 0.1 | 0.05 | 0.05 | 13783.10 | 22268.45 | 41949.64 | 118382.76 | 210907.22 | 302527.28 |
| 0.05 | 0.1 | 0.05 | 0.05 | 11915.89 | 20135.48 | 40328.27 | 124702.50 | 205327.38 | 267028.34 |
| 0.1 | 0.05 | 0.05 | 0.05 | 12742.31 | 21427.96 | 41775.47 | 117471.95 | 216375.04 | 288731.09 |
| 0.1 | 0.1 | 0.025 | 0.05 | 11695.91 | 19724.78 | 40345.43 | 122966.52 | 204077.74 | 347431.20 |
| 0.1 | 0.1 | 0.05 | 0.025 | 10769.72 | 15193.14 | 26137.63 | 72223.46 | 185688.72 | 272214.32 |
From the first set of rows in the table, it can be seen that a higher probability of a potential 'criminal' leads to a larger loss. For the second set of rows, the following chart illustrates the changes.

[Chart: 50% VaR and 99.9% VaR when a single level is strictly controlled (junior analyst, senior associate, managers & directors, head & vice-presidents) versus no level control.]

If only one level is strictly controlled, the loss decreases to a different degree in each case. From both the expected-loss and the extreme-loss point of view, the conclusion is clear: strictly controlling the 'head and vice-presidents' level against asset misappropriation is the most efficient way to control the loss.
3.1.2	P2	-	Implement	a	whistleblowing	policy	
For the 'implement a whistleblowing policy' control, it can be assumed that whistleblowing can only happen when another employee has access to the relevant asset; this makes sense because only employees with the same access level can disclose the 'criminal'. To keep the model clear, the probability of being disclosed by an employee at the same level is set to 0.5, and once the fraud is disclosed the loss is 0. The loss can then be compared with and without this control.
Disclosed	
probability	
25%VaR	 50%VaR	 75%VaR	 95%VaR	 99%VaR	 99.9%VaR	
No	Control	 19619.21	 	 29618.54	 	 53995.88	 	 178867.91	 	 246059.11	 	 371722.37	 	
0.25	 16779.74	 	 25913.66	 	 47422.27	 	 157204.22	 	 231346.46	 	 338667.38	 	
0.5	 13783.10	 	 22268.45	 	 41949.64	 	 118382.76	 	 210907.22	 	 302527.28	 	
0.75	 10991.09	 	 18582.81	 	 36765.00	 	 82775.73	 	 179818.56	 	 256146.46	 	
From the table, it is clear that the correlation between the disclosure probability and the loss is negative. This also makes sense from a management point of view: the more whistleblowing, the lower the loss.
3.1.3	P3	-	Impose	clear	segregation	of	duties	
In corporate management, segregation of duties is always necessary: for security reasons, an employee in a given department should have no access to assets unrelated to his or her duties. In this model, if the 'impose clear segregation of duties' control exists, every employee only has access to 80% of the assets at his or her access level; the top level, however, is not affected by this control.
Trans-department	Asset	 25%VaR	 50%VaR	 75%VaR	 95%VaR	 99%VaR	 99.9%VaR	
0.4	 12908.76	 	 21278.66	 	 41080.52	 	 117088.34	 	 210221.36	 	 301523.66	 	
0.6	 13351.52	 	 21782.61	 	 41466.08	 	 117592.98	 	 210578.02	 	 302022.50	 	
0.8	 13783.10	 	 22268.45	 	 41949.64	 	 118382.76	 	 210907.22	 	 302527.28	 	
No	 Control	 14222.78	 	 22704.57	 	 42397.68	 	 118984.80	 	 211220.30	 	 302998.81	 	
	
[Chart: 50% VaR and 99.9% VaR for trans-department asset access of 0.4, 0.6, 0.8 and no control.]

From the plot, controlling trans-department access is not an effective way to prevent huge losses, although it has some effect on controlling the expected loss.
3.1.4	P4	-	Control	access	to	buildings	and	systems	
Controlling access is a common measure both for corporate management and for security in modern business. In this model, all employees are separated into 4 levels; higher-level staff have more access, the assets they can reach are more valuable, and a higher level's access covers the lower levels'. If a potential 'criminal' targets higher-level assets to which he has no access, he needs to obtain a permit or signature from a higher level, and there is a certain probability of obtaining this higher access. Considering how universal this control is, it is treated here as a necessary protection that is never removed; instead, the probabilities of obtaining higher access are adjusted to see how the VaR changes.
Lower	Access	Probability	 VaR	
1->2	 2->3	 3->4	 25%VaR	 50%VaR	 75%VaR	 95%VaR	 99%VaR	 99.9%VaR	
0.5	 0.25	 0.1	 13783.10	 	 22268.45	 	 41949.64	 	 118382.76	 	 210907.22	 	 302527.28	 	
0.25	 0.25	 0.1	 12751.61	 	 21241.60	 	 40958.12	 	 117211.00	 	 209653.17	 	 301148.42	 	
0.5	 0.125	 0.1	 13332.93	 	 21823.25	 	 41435.27	 	 117963.40	 	 210747.79	 	 302040.43	 	
0.5	 0.25	 0.05	 13672.26	 	 22050.67	 	 41792.86	 	 118314.18	 	 210907.22	 	 302527.28	 	
[Chart: 50% VaR and 99.9% VaR when tightening each cross-level authorization step (1→2, 2→3, 3→4) versus the base case.]

From the plot, it is easy to see that the part which should be strictly controlled is the bottom cross-level step: strictly controlling it brings the loss down effectively. In other words, the process of cross-level authorization should be well designed, especially at the bottom level. Authorization to the top level, by contrast, is less important and does not reduce the loss much.
3.1.5 D1 - Checking invoices and related documents
Once asset misappropriation happens, checking invoices and related documents can also prevent losses: for example, a daily or ad-hoc review can reveal unusual situations, and once a fraud is discovered the relevant account can be locked to prevent further loss. The assumption is made that all cross-level misappropriations may be caught by this check, and the probability that a misappropriation is nevertheless not prevented by the 'checking invoices and related documents' control is set to 0.5. If this control is not used, or fails, the resulting increase in VaR can be shown for this case.
Prevent	
probability	
25%VaR	 50%VaR	 75%VaR	 95%VaR	 99%VaR	 99.9%VaR	
0.25	 9323.02	 	 14158.38	 	 23769.44	 	 100787.35	 	 199441.42	 	 293501.83	 	
0.5	 13783.10	 	 22268.45	 	 41949.64	 	 118382.76	 	 210907.22	 	 302527.28	 	
0.75	 18213.83	 	 30380.83	 	 60060.16	 	 144409.18	 	 225844.52	 	 323583.66	 	
No	Control	 22558.54	 	 38510.17	 	 78060.97	 	 170365.08	 	 250599.46	 	 343463.46	 	
The higher this probability, the higher the loss; in other words, stronger supervision means a lower loss.
Alternatively, a lighter control can be applied in which only cross-level misappropriation in one direction is checked, either from a higher level to a lower level or from a lower level to a higher one. The two results can be compared as follows.
Check	
Direction	
25%VaR	 50%VaR	 75%VaR	 95%VaR	 99%VaR	 99.9%VaR	
Both	 19619.21	 	 29618.54	 	 53995.88	 	 178867.91	 	 246059.11	 	 371722.37	 	
Low->High	 26331.34	 	 45449.64	 	 92723.68	 	 201669.75	 	 281214.65	 	 397574.95	 	
High->Low	 22251.52	 	 32529.17	 	 56893.23	 	 181279.66	 	 248473.46	 	 374311.24	 	
	
[Chart: 50% VaR and 99.9% VaR for checking both directions, low→high only, and high→low only.]

Here it can be seen that checking invoices from the high level to the low level gives a loss very similar to checking both directions. In other words, checking high-to-low is the more effective direction, and checking low-to-high is less important. This is probably because most of the loss happens when high-level staff misappropriate lower-level assets.
3.1.6 D2 - Internal Audit
Unlike the previous controls, the internal audit only occurs at fixed points in time, so this control cannot prevent every loss from happening; it can, however, prevent or reduce part of the loss. In the base case we assume that the audit reduces the loss by 2% (so 98% of the loss remains).
Prevent	
Loss	
25%VaR	 50%VaR	 75%VaR	 95%VaR	 99%VaR	 99.9%VaR	
No	Control	 14064.39	 	 22722.91	 	 42805.75	 	 120798.74	 	 215211.45	 	 308701.30	 	
0.98	 13783.10	 	 22268.45	 	 41949.64	 	 118382.76	 	 210907.22	 	 302527.28	 	
0.9	 12657.95	 	 20450.62	 	 38525.18	 	 108718.86	 	 193690.30	 	 277831.17	 	
0.8	 11251.51	 	 18178.33	 	 34244.60	 	 96638.99	 	 172169.16	 	 246961.04	 	
0.7	 9845.07	 	 15906.04	 	 29964.03	 	 84559.12	 	 150648.01	 	 216090.91	 	
This is also a basic parameter: the stricter the internal audit, the lower the loss.
3.1.7 C1 - Insurance and backup
Once a loss from misappropriation happens, insurance can be a good way to control it, and some assets such as important data can be recovered if there is a backup. Here it is assumed that only assets at the second and third levels are insured, in proportions of 70% and 50%; the bottom-level assets have low value and are not cost-efficient to insure, while the top-level assets are only accessible to top-level staff and already have a high level of security, so they are not insured either. The proportions of insurance can, however, be altered to look for a better way of reducing the VaR.
Insurance	Proportion	 VaR	
Level1	 Level2	 Level3	 Level4	 25%VaR	 50%VaR	 75%VaR	 95%VaR	 99%VaR	 99.9%VaR	
No	Control	 23247.35	 	 33175.01	 	 52407.24	 	 127692.78	 	 221607.36	 	 315079.68	 	
0	 0	 0.7	 0.5	 15482.45	 	 19981.17	 	 29676.13	 	 67843.51	 	 114056.54	 	 160163.33	 	
0	 0.7	 0.5	 0	 13783.10	 	 22268.45	 	 41949.64	 	 118382.76	 	 210907.22	 	 302527.28	 	
0.7	 0.5	 0	 0	 16836.87	 	 26886.46	 	 46009.92	 	 121173.33	 	 215548.58	 	 308304.32	 	
0.3	 0.3	 0.3	 0.3	 16273.14	 	 23222.51	 	 36685.07	 	 89384.95	 	 155125.15	 	 220555.78	 	
	
[Chart: 50% VaR and 99.9% VaR for no control, insure high, insure medium, insure low, and average insure allocations.]

It is assumed that the overall percentage of insurance is fixed. Comparing the different focus points for the insurance, the expected loss is lowest when the insurance focuses on the top-level assets, which makes sense because the top level has the highest value. Spreading the insurance evenly across the levels also reduces the loss effectively.
3.1.8 C2 - Tackle relevant employees
After asset misappropriation occurs, the relevant employees have to be tackled; dismissal might be the most common way to deal with this. Once a relevant employee has to be tackled and dismissed, the loss goes beyond the asset loss alone, and a higher-level dismissal has a larger impact. Therefore a severity index is set for each level to capture the extra loss, such as the loss of valuable employees.
Severity index per level and resulting VaR:

| Level1 | Level2 | Level3 | Level4 | 25% VaR | 50% VaR | 75% VaR | 95% VaR | 99% VaR | 99.9% VaR |
| 1 | 1 | 1 | 1 | 10801.93 | 15835.07 | 27158.07 | 71314.39 | 125321.62 | 178246.62 |
| 1 | 1.2 | 1.44 | 1.728 | 13783.10 | 22268.45 | 41949.64 | 118382.76 | 210907.22 | 302527.28 |
| 1 | 1.4 | 1.96 | 2.744 | 17555.94 | 30718.59 | 62141.51 | 183129.61 | 330894.30 | 475413.99 |
| 1 | 1.6 | 2.56 | 4.096 | 22116.06 | 41355.50 | 88486.15 | 269109.48 | 489831.22 | 704930.20 |
This is also a common parameter: the more important the staff member, the higher the loss.
3.1.9 Which is the best control?
Picking part of the data from all the tables above, we can compare the VaR with and without a given control, and in this way judge which control method is the most efficient. Controls P1, P4 and C2 are essential parts of the model and are retained, since removing them would also be unrealistic. The results of removing the other controls are as follows.
Control	 25%VaR	 50%VaR	 75%VaR	 95%VaR	 99%VaR	 99.9%VaR	
Origin	 13783.10	 	 22268.45	 	 41949.64	 	 118382.76	 	 210907.22	 	 302527.28	 	
No	P2	 19619.21	 	 29618.54	 	 53995.88	 	 178867.91	 	 246059.11	 	 371722.37	 	
No	P3	 14222.78	 	 22704.57	 	 42397.68	 	 118984.80	 	 211220.30	 	 302998.81	 	
No	D1	 22558.54	 	 38510.17	 	 78060.97	 	 170365.08	 	 250599.46	 	 343463.46	 	
No	D2	 14064.39	 	 22722.91	 	 42805.75	 	 120798.74	 	 215211.45	 	 308701.30	 	
No	C1	 23247.35	 	 33175.01	 	 52407.24	 	 127692.78	 	 221607.36	 	 315079.68	 	
	
[Chart: 50% VaR and 99.9% VaR for the original setting and with P2, P3, D1, D2 or C1 removed.]

When removing a given control leads to a large increase in the loss, that control is effective. From this plot, 'checking invoices and related documents' (D1) and 'insurance and backup' (C1) are the most effective controls for reducing the expected loss, while 'implement a whistleblowing policy' (P2) and 'checking invoices and related documents' (D1) are effective for reducing the extreme loss. The effects of 'internal audit' (D2) and 'impose clear segregation of duties' (P3) are less obvious once the other controls are in place.
3.2	Sensitivity	analysis	for	Scenario	II	
It is important to explore and analyse how different methods can protect the bank's data from cyber-attacks and reduce the resulting loss. In this scenario, three main factors can be identified that protect the data and recover lost data: the ability of the engineers, the solidity of each firewall, and the backup of the data. The purpose is to compare them and draw a reliable conclusion about which factor is most significant and which strategy is the most efficient way to react to and prevent data sabotage.
3.2.1	Analyzing	importance	of	ability	of	engineers	
As stated above, the frequency of checking the system is the way we measure the capability of the engineers in this scenario, since checking more frequently reduces the average time to detect an infiltration. Therefore, computing the VaR for different values of the checking frequency shows how sensitive the final loss is to the ability of the engineers.
Check	Freq	 25%	VaR	 50%	VaR	 75%	VaR	 95%	VaR	 99%	VaR	 99.9%	VaR	
once	70	mins	 27820.00	 	 32642.00	 	 38692.00	 	 57662.00	 	 70757.35	 	 88987.40	 	
once	60	mins	 	 26932.00	 	 31143.42	 	 36216.00	 	 48334.67	 	 59349.45	 	 76068.35	 	
once	50	mins	 	 25376.00	 	 29248.00	 	 33388.00	 	 39912.00	 	 44763.71	 	 50326.59	 	
once	40	mins	 	 23306.00	 	 26910.00	 	 30740.00	 	 36652.00	 	 40856.00	 	 46206.00	 	
once	30	mins	 	 20182.00	 	 23512.00	 	 27060.00	 	 32304.00	 	 36158.00	 	 40582.00	 	
	
[Chart: Impact of time to detect on VaR across quantiles, for checking frequencies from once every 70 minutes to once every 30 minutes.]

From the results shown in the chart above, there is a clear relationship between the ability of the engineers and the data loss: the more frequently the engineers check the system, the lower the loss, across all quantiles of the VaR. The change is large between checking once every 70 minutes, once every 60 minutes and once every 50 minutes, so it is efficient and worthwhile to improve the level of the network engineers from once every 60 minutes to once every 50 minutes, taking the cost of the engineers into account. Of course, the bank could hire the most professional engineers to protect its important data if it judges this necessary given the importance of the data. The largest change is 70081.88, obtained by moving the frequency from once every 60 minutes to once every 30 minutes.
3.2.2	Analyzing	solidity	of	each	firewall	
Firewalls are the most significant and most common method of protecting the bank's data from the majority of data-sabotage behaviour. In this part, we want to show how essential each firewall is by decreasing the probability of passing it, which we use as the standard for improving its security level.
50%	VaR	 Firewall	1	 Firewall	2	 Firewall	3	
(50%,	25%	,	5%)	 31143.42	 	 31143.42	 	 31143.42	 	
reduced	by	10%	 27972.00	 	 29394.00	 	 31058.00	 	
reduced	by	20%	 24786.00	 	 27640.00	 	 30978.00	 	
reduced	by	30%	 21704.00	 	 25772.00	 	 30916.00	 	
	
99.9%	VaR	 Firewall	1	 Firewall	2	 Firewall	3	
(50%,	25%	,	5%)	 76068.35	 	 76068.35	 	 76068.35	 	
reduced	by	10%	 69854.40	 	 74094.82	 	 74053.13	 	
reduced	by	20%	 65736.68	 	 70948.92	 	 73887.35	 	
reduced	by	30%	 61185.86	 	 65778.22	 	 69987.35	 	
[Charts: Improving the security of each firewall – 50% VaR and 99.9% VaR for firewalls 1–3 at the base pass probabilities (50%, 25%, 5%) and with reductions of 10%, 20% and 30%.]
	
From the graphs above, the security level of the firewalls is clearly a sensitive driver of the VaR; the largest change is 69987.35, obtained by improving the security level of firewall 1. We therefore conclude that the firewalls are essential for protecting the bank's data.
3.2.3	Impact	of	percentage	of	total	data	in	backup	on	VaR	
Normally the bank can recover lost data from its backup; however, it cannot retrieve all data from the backup database because of occasional staff mis-operations. It is therefore important to ensure that the bank has a backup of all essential data, so that the business keeps working even in the worst case where some essential data is lost. In this part, the percentage of data held in the backup is varied in order to show the change in VaR and to find the most efficient way to recover the data after data sabotage.
%	of	data	in	backup	 25%	VaR	 50%	VaR	 75%	VaR	 95%	VaR	 99%	VaR	 99.9%	VaR	
80%	 26932.00	 	 31143.42	 	 36216.00	 	 48334.67	 	 59349.45	 	 76068.35	 	
85%	 25785.00	 	 29853.25	 	 34705.25	 	 46282.36	 	 56996.67	 	 73124.86	 	
90%	 24662.00	 	 28537.00	 	 33178.50	 	 44241.33	 	 54723.11	 	 70181.37	 	
95%	 23544.25	 	 27252.50	 	 31671.00	 	 42219.02	 	 52339.19	 	 67237.88	 	
	
[Chart: Impact of the percentage of data in backup on VaR, for backup levels of 80%, 85%, 90% and 95%.]

The chart above shows a large change when the percentage of clients' information held in backup is increased. Even though only part of the clients' information can be copied, and backups of management information normally cannot be made on time, increasing the backup still has a large impact on reducing the VaR at the different quantile levels.
3.2.4	Impact	of	different	firewalls	
Changing the number of firewalls can be used to look for a better firewall architecture. The '3 firewalls' setting above is the bank's initial condition; what if the bank reduces the number of firewalls to 2? At the same time, some parameters have to be adjusted so that the two structures are comparable. The results are compared below to find strategic options for the bank's network system.
| | 3-firewall structure (before) | 2-firewall structure (after) |
| Time to break the firewall (min) | 1st: 5, 2nd: 15, 3rd: 45 | 1st: 15, 2nd: 50 |
| Probability of breaking the firewall | 1st: 0.5, 2nd: 0.25, 3rd: 0.05 | 1st: 0.2, 2nd: 0.04 |
| Data volume proportion behind the firewall | 1st: 0.05, 2nd: 0.15, 3rd: 0.8 | 1st: 0.2, 2nd: 0.8 |
| Data value behind the firewall ($ per unit) | 1st: 10, 2nd: 20, 3rd: 50 | 1st: 17.5, 2nd: 50 |
After running the same algorithm, the following results are obtained:

| | 25% VaR | 50% VaR | 75% VaR | 95% VaR | 99% VaR | 99.9% VaR |
| 3 firewalls | 26876.00 | 31220.00 | 36354.65 | 48574.14 | 59106.87 | 74493.84 |
| 2 firewalls | 26880.00 | 31817.38 | 37716.00 | 46984.00 | 54560.53 | 64589.95 |
	
[Chart: Impact of the different firewall structures (2 vs 3 firewalls) on VaR across quantiles.]

From this plot a clear pattern can be seen: the 3-firewall system gives a slightly lower expected loss (50% VaR), while the 2-firewall system gives a lower extreme loss (95%–99.9% VaR).
3.3	Sensitivity	Analysis	for	Aggregated	Scenario	
In the aggregated scenario generated above, the only free parameter is the correlation parameter, so it can be adjusted to explore the relationship between the total loss and the two individual losses. The following adjustments of the correlation parameter were made in this part to see the change in VaR; in the table below, 0 means no correlation between the two scenarios.
Correlation	 25%VaR	 50%VaR	 75%VaR	 95%VaR	 99%VaR	 99.9%VaR	
0	 30254.03	 	 38285.72	 	 55419.95	 	 127869.93	 	 219098.66	 	 311947.77	 	
0.3	 33734.71	 	 43380.94	 	 63110.38	 	 140655.30	 	 235615.27	 	 333344.57	 	
0.7	 37881.34	 	 49363.13	 	 72099.36	 	 156081.73	 	 255985.02	 	 359899.80	 	
1	 40715.10	 	 53411.87	 	 78165.64	 	 166717.43	 	 270256.66	 	 378595.63	 	
	
[Chart: VaR of the aggregated scenario across quantiles for correlations of 0, 0.3, 0.7 and 1.]

From the plot, a stronger correlation gives a higher VaR, both for the expected loss and for the extreme loss. The explanation might be this: once a loss happens in one of the scenarios, the underlying risk factor is relatively high, and because of the correlation a high risk factor also causes a loss in the other scenario.
	
4.Alternative	Adjustment	on	Loss	Measure	Quantile	
Next, the cluster method is introduced with the aim of improving the VaR results; this approach is valuable for generating new VaR quantiles based on severity, which makes it possible to combine expert-opinion scenarios with quantitative operational risk data. The methodology was first proposed by Dr Sovan Mitra in 2013, using key ideas from machine learning [12].
	
4.1	Introduction	to	Cluster	Analysis	
To achieve the scenario adjustment, cluster analysis can be applied to match the severity magnitude. Clustering is a method of grouping data into subsets, also known as clusters. K-means cluster analysis is a form of unsupervised learning, a branch of machine learning; unsupervised learning explores common features of data through a particular algorithm. The K-means algorithm is a simple iterative clustering algorithm: it uses a distance (e.g. the Euclidean distance) as the similarity index to partition a given data set into K classes, the centre of each class is the mean of all the values in that class, and each class is described by its cluster centre.
4.2	Application	on	Adjustment	of	Scenario	Result	
The basic steps of the K-means clustering algorithm are the following.
Step 1: Select K objects in the data space as the initial centres; each object represents one cluster centre.
Step 2: For every data object in the sample, calculate the Euclidean distance between it and the cluster centres; each object is then assigned, by the nearest-centre criterion, to the class of its nearest cluster centre.
Step 3: Update the cluster centres: the mean of all objects in each category becomes the cluster centre of that class; then compute the value of the objective function.
Step 4: Determine whether the cluster centres and the value of the objective function have changed. If both stay the same, output the results; if they have changed, return to Step 2.
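A minimal MATLAB sketch of this adjustment, assuming the simulated losses are held in a vector losses and using MATLAB's built-in kmeans (which follows the iterative scheme above), is:

```matlab
% Cluster the simulated losses into K = 6 groups and derive adjusted quantiles:
% each cluster centre becomes a loss level, and the cumulative share of the
% observations up to that cluster becomes the corresponding modified quantile.
K = 6;
[idx, centres]   = kmeans(losses(:), K, 'Replicates', 5);  % 1-D k-means on the losses
[centres, order] = sort(centres);                          % order clusters by loss level
counts           = histcounts(idx, 0.5:1:K+0.5);           % observations per cluster
cumQuantile      = cumsum(counts(order)) / numel(losses);  % modified quantile levels
disp([cumQuantile(:) centres(:)]);                         % e.g. 0.481 -> 21335.32, ...
```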
Using the above algorithm, the Monte Carlo simulation results are used as the sample, and the VaR quantiles are modified to the cumulative proportions and cluster centres of the resulting intervals. The results are shown below.

For scenario 1 - asset misappropriation:
Unmodified
25% VaR 50% VaR 75% VaR 95% VaR 99% VaR 99.9% VaR
13783.10 22268.45 41949.64 118382.76 210907.22 302527.28
Modified
48.1% 76.3% 92.4% 96.5% 99.8% 100.0%
21335.32 43380.22 84397.50 151063.49 257611.43 429427.17

For scenario 2 - cyber attack:
Unmodified
25% VaR 50% VaR 75% VaR 95% VaR 99% VaR 99.9% VaR
26932.00 31143.42 36216.00 48334.67 59349.45 76068.35
Modified
31.6% 66.6% 88.7% 97.1% 99.7% 100.0%
28136.00 34181.89 41909.31 52582.55 66845.03 90931.40
	
	
	
4.3	Important	Meaning	to	Loss	Measure	Quantile	
The standard VaR quantile points (25%, 50%, 75%, 95%, 99%, 99.9%) are based on empirical judgement and are usually fixed as a standard for operational risk modelling. However, fixed interval points cannot clearly reflect the features of different distributions. The cluster method provides an effective way to reflect, at the same time, the shape of the distribution over several intervals and the loss level in each interval, which is very important for improving the loss measure quantile. In our results, although the fixed interval points have changed, the modified outcome reflects the average VaR level in 6 different intervals, and it also reflects the relative position of each interval within the overall loss distribution.
	
5.Conclusion	
In conclusion, loss distributions can be generated for the asset misappropriation and cyber-attack scenarios and for the combination of the two. Based on our scenario analysis, the sensitivity analysis of the scenarios is useful for deriving the most essential factors of these operational risks, and these form the basis of the strategic suggestions to managers.
5.1	Discussion	of	strategic	options	
In this part, specific strategies are discussed separately for asset misappropriation (scenario 1) and cyber attack (scenario 2) at our bank.
In scenario 1, the analysis shows that the internal fraudsters most responsible for asset misappropriation come from the top two levels of the bank: the head of the bank and vice-presidents, and managers or directors. By abusing their authority, they can access and appropriate the bank's assets without supervision, and once such an event happens it almost surely causes a large loss for the bank. We therefore strongly suggest that the bank engage a third party as an independent asset management platform to record and check high-level employees' use of their authority, especially over the bank's assets. Whistleblowing is also a highly efficient control for reducing OR losses in scenario 1. Our scenario data show that a whistleblowing scheme, whether within the same level or across levels of employees, contributes more to operational risk management in this setting than the other controls. Whistleblowing should therefore be promoted, with an appropriate bonus, to help the bank establish the scheme and build employees' whistleblowing awareness.
In scenario 2, a cyber attack is normally an intentional external attack on the bank's information network, so it can be viewed as a contest between our information security engineers and the hackers. Reducing the engineers' detection interval from once every 70 minutes to once every 50 minutes is efficient; pushing it below 50 minutes, however, brings little additional benefit at high expense, possibly because at a 50-minute interval the engineers' capability already exceeds that of the majority of hackers. As for firewalls, adding more firewalls reduces the loss of essential information but increases the loss of non-essential data. Keeping the overall firewall strength at the same level, fewer firewalls are each assumed to be stronger at keeping hackers out of the network; the results then show that with fewer firewalls the bank loses more core information but less ordinary data than with a multi-layered firewall set-up. Depending on the type of information the bank most wants to protect, managers can choose and, if necessary, adjust the firewall strategy.
The dependency analysis in our combined scenario confirms that the quality of employees is a key risk driver of both scenarios; it is therefore necessary to improve the bank's recruitment procedure and to vet CVs as well as references.
5.2	Limitation	and	Improvement	
In this paper, some essential parameters of the scenarios simply rely on experts' opinions and historical loss distributions, which may introduce cognitive biases relative to the real market, and the resulting predictions remain exposed to the uncertainties of the future business environment. The parameters should therefore be based on both internal and external experts as well as reasonable assumptions about future changes in local and global conditions, and, where necessary, conservative assumptions should be adopted for sensitive factors. The model could also be made more flexible with respect to parameter changes; for instance, hackers' ability should be varied more randomly and less predictably to simulate realistic cases. A more advanced dependency structure could be applied to attribute different risk drivers to the scenarios, so that a more appropriate correlation and covariance matrix can be generated when combining the two scenarios.
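One possible form of such a dependency structure, offered here only as a hedged sketch and not as part of the original model, is to join the two annual loss samples with a Gaussian copula instead of the simple variance-covariance formula of Section 2.3. The vectors sumtheQU and vnlost are the scenario 1 and scenario 2 annual losses produced by the appendix code, and rho = 0.3 is the same illustrative correlation used earlier.

% Sketch only: Gaussian-copula aggregation of the two scenario loss samples
rho = 0.3;
N = 10000;
U = copularnd('Gaussian', [1 rho; rho 1], N);   % correlated uniform pairs
S1 = quantile(sumtheQU, U(:,1));                % map to scenario 1 loss quantiles
S2 = quantile(vnlost, U(:,2));                  % map to scenario 2 loss quantiles
Ltotal = S1 + S2;                               % aggregated annual loss
VARcopula = prctile(Ltotal, [25 50 75 95 99 99.9])

Compared with the variance-covariance approach, this keeps the full marginal distributions of both scenarios, while the dependence between them is governed entirely by the chosen copula family.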
	
6.Reference	
[1] K. van der Heijden, Scenarios: The Art of Strategic Conversation, Wiley, Chichester, 1996.
[2] T.J. Postma and F. Liebl, How to improve scenario analysis as a strategic management tool, Technological Forecasting & Social Change 72 (2005) 161–173.
[3] P.J.H. Schoemaker, C.A.J.M. van der Heijden, Integrating scenarios into strategic planning at Royal Dutch/Shell, Plann. Rev. 20 (3) (1992) 41–48.
[4] K. van der Heijden, Scenarios: The Art of Strategic Conversation, Wiley, Chichester, 1996.
[5] M. Godet, Scenarios and Strategic Management, Butterworth, London, 1987.
[6] W.R. Huss, A move toward scenario analysis, Int. J. Forecast. 4 (1988) 377–388.
[7] M.E. Porter, Competitive Advantage: Creating and Sustaining Superior Performance, Free Press, New York, 1985.
[8] P. Schwartz, The Art of the Long View: Planning for the Future in an Uncertain World, Doubleday Currency, New York, 1991.
[9] U. von Reibnitz, Scenario Techniques, McGraw-Hill, Hamburg, 1988.
[10] G. Ringland, Scenario Planning: Managing for the Future, Wiley, Chichester, 1998.
[11] R.P. Bood, Th.J.B.M. Postma, Strategic learning with scenarios, Eur. Manag. J. 15 (6) (1997) 633–647.
[12] S. Mitar, Scenario Generation for Operational Risk, Intelligent Systems in Accounting, Finance and Management, 20 (2013), 163–187.
[13] E. Barbieri Masini, J. Medina Vasquez, Scenarios as seen from a human and social perspective, Technol. Forecast. Soc. Change 65 (1) (2000) 49–66.
[14] K. van der Heijden, R. Bradfield, G. Burt, G. Cairns, G. Wright, The Sixth Sense: Accelerating Organizational Learning with Scenarios, Wiley, Chichester, 2002.
[15] J. Corrigan et al., Milliman Research Report: Aggregation of Risks and Allocation of Capital, 2009.
	
7.Appendix	
1. Codes	for	Scenario	I	based	on	Matlab	
clear;close all;clc
rand('state',0); % fix random number, good for sensitivity
randn('seed',0); % fix random number
H=2000; % total employees
Hlevel=[1200 600 180 20]; % employees level number
ptheft=[.1 .1 .05 .05]; % criminal probability
muthe=[10 20 100 1000]; % asset mu
sigmathe=[3 6 30 300]; % asset sigma
percentage=[.5 .75 .9]; % volume of asset in different level
itemrange=[15 35 65 100]; % level setting
whithe=0.5; % whistleblowing probability
segthe=0.2; % cross-department probability
minuamou=0.8; % proportion of access to cross-asset
pplevel=[.5 .25 .1]; % cross-level probability
severi=[1 1.2 1.44 1.728]; % severity
Sevinteadu=0.98; % internal audit
insran=[0 .7 .5 0]; % insurance proportion
N=10000;
for i=1:N
% P1 - Vet employees by CV and references
ntheft(1)=binornd(Hlevel(1),ptheft(1),1,1);
ntheft(2)=binornd(Hlevel(2),ptheft(2),1,1);
ntheft(3)=binornd(Hlevel(3),ptheft(3),1,1);
ntheft(4)=binornd(Hlevel(4),ptheft(4),1,1);
for ii=1:4
sumtiWU(ii)=0;sumtiP2(ii)=0;sumtiD1(ii)=0;sumtiQU(ii)=0;
if ntheft(ii)==0 % amou(ii)=0; jthe(ii)=0; sxx(ii)=0;
ppp(ii)=0;
break;
end
for j=1:ntheft(ii)
% decide amount
amou(ii)=ceil(normrnd(muthe(ii),sigmathe(ii)));
% decide values
xx=rand();
if xx<=percentage(1) sxx(ii)=rand()*10;
elseif xx<=percentage(2) sxx(ii)=rand()*20+10;
elseif xx<=percentage(3) sxx(ii)=rand()*30+30;
else sxx(ii)=rand()*40+60;
end
% decide levels
if sxx(ii)<=itemrange(1) jthe(ii)=1;
elseif sxx(ii)<=itemrange(2) jthe(ii)=2;
elseif sxx(ii)<=itemrange(3) jthe(ii)=3;
else jthe(ii)=4;
end
QUQU=1;
% P2 - Implement a whistleblowing policy
if (ii==jthe(ii)) && (rand()<=whithe) QUQU=0; end
% P3 - Impose clear segregation of duties
if (ii~=4)&&(rand()<=segthe)
amou(ii)=ceil(amou(ii)*minuamou); end
% P4 - Control access to buildings and systems
if sxx(ii)<=itemrange(1) ppp(ii)=1;
elseif sxx(ii)<=itemrange(2)
ppp(ii)=1*(ii>=2)+(ii==1)*(rand()<pplevel(1));
elseif sxx(ii)<=itemrange(3)
ppp(ii)=1*(ii>=3)+(ii==1)*(rand()<pplevel(1))*(rand()<pplevel(2))+(ii==2)*(rand()<pplevel(2));
else
ppp(ii)=(ii==4)+(ii==1)*(rand()<pplevel(1))*(rand()<pplevel(2))*(rand()<pplevel(3))+(ii==2)*(rand()<pplevel(2))*(rand()<pplevel(3))+(ii==3)*(rand()<pplevel(3));
end
DDD=1;
% D1 - Checking invoices and related documents
if ii~=jthe(ii) DDD=0.5; end
% C1 - Insurance + C2 - Tackle relevant employees
sumtiQU(ii)=sumtiQU(ii)+amou(ii)*sxx(ii)*ppp(ii)*severi(ii)*(1-insran(ii))*DDD*QUQU;
end
%D2 - Internal Audit
sumtheQU(i)=sum(sumtiQU)*Sevinteadu;
end
end
hist(sumtheQU,1000);
% percentile selection of the convoluted distributions
VARQU=prctile(sumtheQU,[25, 50, 75, 95, 99, 99.9])
	
2. Codes	for	Scenario	II	based	on	Matlab	
rand('state',0);
randn('seed',0);
H=100; % possible attack
Efrequency=60; % Engineers check system once an hour
amoutdata=10000; % assume there are 10000 units of data
fiwotime=[5 15 45]; % time used by hackers to pass each firewalls
probattk=[.5 .25 .05];% probability of hackers pass each firewalls
perdata=[.05 .1 .85]; % percentage of data hackers pass each firewall
valdata=[10 20 50]; % dollars per unit of data
percentpermin=.05; % data loss rate when hackers pass third firewall
percentdata=.5; %the proportion of clients’ data
backupdata=.8; % back up 80% of clients' data
percentage=[.6 .9 .95 .975 .99];
N=10000; % times that Monte Carlo runs
for ii=1:N
vnlost(ii)=0;
for i=1:H
restime=rand()*Efrequency;
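% restime = time available to the hacker before the engineer's next check, uniform on [0, Efrequency] minutes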
if restime<fiwotime(1) srr=0;svv=0;
elseif restime<fiwotime(2)
srr=(rand()<probattk(1))*perdata(1); svv=srr*valdata(1);
elseif restime<fiwotime(3)
srr=(rand()<probattk(1))*(perdata(1)+(rand()<probattk(2))*perdata(2));
svv=srr*valdata(1)+(srr>perdata(1))*(srr-perdata(1))*(valdata(2)-valdata(1));
else
srr=(rand()<probattk(1))*(perdata(1)+(rand()<probattk(2))*(perdata(2)+(rand()<probattk(3))*(restime-fiwotime(3))*percentpermin));
svv=srr*valdata(1)+(srr>perdata(1))*(srr-perdata(1))*(valdata(2)-valdata(1))+(srr>(perdata(1)+perdata(2)))*(srr-perdata(1)-perdata(2))*(valdata(3)-valdata(2));
end
vlost(i)=svv*amoutdata;
%backup of loss data in clients information
% vlost is divided into 100 units: 50% clients' data, 50% management data; clients' data has an 80% backup
veachlost(i)=vlost(i)/100;
for j=1:100
vback(j)=(rand()<percentdata)*backupdata*veachlost(i);
vlost(i)=vlost(i)-vback(j);
end
vnlost(ii)=vnlost(ii)+vlost(i);
end
end
hist(vlost,1000);
% plot of the results
VAR=prctile(vlost,[25, 50, 75, 95, 99, 99.9])
% percentile selection of the convoluted distributions	
	
3. Codes	for	Aggregated	Scenario	based	on	Matlab	
X1=sort(vnlost);
X2=sort(sumtheQU);
corr=[0 .3 .7 1]; % correlation
output=[]
for j=1:4
ROU=[1 corr(j);corr(j) 1]; % correlation matrix
for i=1:N
X=[X1(i) X2(i)];
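% var-cov aggregation of the two sorted loss samples: Ltotal = sqrt(X * ROU * X')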
XBOTH(i)=sqrt(X*ROU*X');
end
VARboth=prctile(XBOTH,[25, 50, 75, 95, 99, 99.9])
plot([25, 50, 75, 95, 99, 99.9],VARboth)
output=[output;VARboth]
hold on,
end
output
	
	
4. K-mean	cluster	algorithm	based	on	Matlab
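% Adjust the fixed VaR quantiles by K-means clustering with K = 6: each simulated
% loss is assigned to the nearest current centre (initialised at the original VaRs),
% centres are recomputed as group means, and iteration stops once every centre
% changes by no more than 5%; SULL and SSS record the modified quantile boundaries.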
Q=VARQU; %VAR
n=X2; % LOSS
PEC=[25 50 75 95 99 99.9]; % PERCENTAGE
k=[0 0 0 0 0 0]; % LOCATION
SUI1=[0 0 0 0 0 0]; % AMOUNT OF EACH GROUP
SUM1=Q;
SUM2=Q;
%n=gamrnd(2,20000,10000,1);
subplot(1,2,1)
hist(n,1000);
subplot(1,2,2);
%plot([25, 50, 75, 95, 99, 99.9],SUM1,'-O');
while 1
SUM1=[0 0 0 0 0 0];
% grouping
for j=1:10000
for i=1:6
k(i)=abs(SUM2(i)-n(j));
end
m=min(k);
[xx]=find(k==m);
SUM1(xx)=SUM1(xx)+n(j);
SUI1(xx)=SUI1(xx)+1;
end
% K-means K=6
SUL(1)=0;
for i=1:6
SUM1(i)=SUM1(i)/SUI1(i);
SUL(i+1)=SUL(i)+SUI1(i);
end
for i=1:6
SULL(i)=SUL(i+1);
SSS(i)=n(SULL(i));
end
%disp(SULL);
%disp(SUM1);
SUI1=[0 0 0 0 0 0];
% convergence condition
if max(abs(SUM1-SUM2)./SUM2)<=0.05 break;
end
SDASSDA=6;
SUM2=SUM1;
hold on,
plot(SULL(1:SDASSDA)/100,SSS(1:SDASSDA));
end
hhhh=[SULL;SSS;PEC*100;Q]
hold on,
plot(SULL(1:SDASSDA)/100,SSS(1:SDASSDA),'LineWidth',3);
hold on,
plot(PEC(1:SDASSDA),Q(1:SDASSDA),'-O');
Contenu connexe

Tendances

BlackLine-Executive-Summary-with-Product-Overviews
BlackLine-Executive-Summary-with-Product-OverviewsBlackLine-Executive-Summary-with-Product-Overviews
BlackLine-Executive-Summary-with-Product-Overviews
José Luiz Moço
 
Salesforce Service Cloud 2
Salesforce Service Cloud 2Salesforce Service Cloud 2
Salesforce Service Cloud 2
fishman29
 
Integração SAP com Plataformas 100% OpenSource
Integração SAP com Plataformas 100% OpenSourceIntegração SAP com Plataformas 100% OpenSource
Integração SAP com Plataformas 100% OpenSource
WSO2
 

Tendances (20)

Governance Risk and Compliance for SAP
Governance Risk and Compliance for SAPGovernance Risk and Compliance for SAP
Governance Risk and Compliance for SAP
 
BlackLine-Executive-Summary-with-Product-Overviews
BlackLine-Executive-Summary-with-Product-OverviewsBlackLine-Executive-Summary-with-Product-Overviews
BlackLine-Executive-Summary-with-Product-Overviews
 
Salesforce overview
Salesforce overviewSalesforce overview
Salesforce overview
 
Real-time Streaming and Querying with Amazon Kinesis and Amazon Elastic MapRe...
Real-time Streaming and Querying with Amazon Kinesis and Amazon Elastic MapRe...Real-time Streaming and Querying with Amazon Kinesis and Amazon Elastic MapRe...
Real-time Streaming and Querying with Amazon Kinesis and Amazon Elastic MapRe...
 
Introduction to Salesforce Platform - Basic
Introduction to Salesforce Platform - BasicIntroduction to Salesforce Platform - Basic
Introduction to Salesforce Platform - Basic
 
AWS Adoption in FSI
AWS Adoption in FSIAWS Adoption in FSI
AWS Adoption in FSI
 
Salesforce.com Overview
Salesforce.com OverviewSalesforce.com Overview
Salesforce.com Overview
 
Salesforce Service Cloud 2
Salesforce Service Cloud 2Salesforce Service Cloud 2
Salesforce Service Cloud 2
 
Introduction To IPaaS: Drivers, Requirements And Use Cases
Introduction To IPaaS: Drivers, Requirements And Use CasesIntroduction To IPaaS: Drivers, Requirements And Use Cases
Introduction To IPaaS: Drivers, Requirements And Use Cases
 
How Salesforce CRM Improves Your Sales Pipeline?
How Salesforce CRM Improves Your Sales Pipeline?How Salesforce CRM Improves Your Sales Pipeline?
How Salesforce CRM Improves Your Sales Pipeline?
 
Salesforce Intro
Salesforce IntroSalesforce Intro
Salesforce Intro
 
Splunk for Enterprise Security featuring User Behavior Analytics
Splunk for Enterprise Security featuring User Behavior Analytics Splunk for Enterprise Security featuring User Behavior Analytics
Splunk for Enterprise Security featuring User Behavior Analytics
 
Salesforce Shield: How to Deliver a New Level of Trust and Security in the Cloud
Salesforce Shield: How to Deliver a New Level of Trust and Security in the CloudSalesforce Shield: How to Deliver a New Level of Trust and Security in the Cloud
Salesforce Shield: How to Deliver a New Level of Trust and Security in the Cloud
 
Predictable Revenue: Create Predictable & Scalable Revenue - Aaron Ross
Predictable Revenue: Create Predictable & Scalable Revenue - Aaron RossPredictable Revenue: Create Predictable & Scalable Revenue - Aaron Ross
Predictable Revenue: Create Predictable & Scalable Revenue - Aaron Ross
 
Integração SAP com Plataformas 100% OpenSource
Integração SAP com Plataformas 100% OpenSourceIntegração SAP com Plataformas 100% OpenSource
Integração SAP com Plataformas 100% OpenSource
 
Extend SAP S/4HANA to deliver real-time intelligent processes
Extend SAP S/4HANA to deliver real-time intelligent processesExtend SAP S/4HANA to deliver real-time intelligent processes
Extend SAP S/4HANA to deliver real-time intelligent processes
 
Salesforce Presentation
Salesforce PresentationSalesforce Presentation
Salesforce Presentation
 
AWS Black Belt Tips
AWS Black Belt TipsAWS Black Belt Tips
AWS Black Belt Tips
 
Kyzer Software Trade Finance Process Automation
Kyzer Software Trade Finance Process AutomationKyzer Software Trade Finance Process Automation
Kyzer Software Trade Finance Process Automation
 
Salesforce Sales Cloud: Best Practices to Win More Deals
Salesforce Sales Cloud: Best Practices to Win More DealsSalesforce Sales Cloud: Best Practices to Win More Deals
Salesforce Sales Cloud: Best Practices to Win More Deals
 

En vedette

2016,Problem B
2016,Problem B2016,Problem B
2016,Problem B
RUIXIN BAO
 
Scenario Testing
Scenario TestingScenario Testing
Scenario Testing
realbot
 
14 analysis techniques
14 analysis techniques14 analysis techniques
14 analysis techniques
Majong DevJfu
 
Project Risk Management (10)
 Project Risk Management (10) Project Risk Management (10)
Project Risk Management (10)
Serdar Temiz
 
scenario analysis
scenario analysisscenario analysis
scenario analysis
joerizk
 

En vedette (20)

Sensitivity analysis
Sensitivity analysisSensitivity analysis
Sensitivity analysis
 
Sensitivity Analysis
Sensitivity AnalysisSensitivity Analysis
Sensitivity Analysis
 
2016,Problem B
2016,Problem B2016,Problem B
2016,Problem B
 
Risk Governance: the challenge of risk transfer instruments and catastrophic ...
Risk Governance: the challenge of risk transfer instruments and catastrophic ...Risk Governance: the challenge of risk transfer instruments and catastrophic ...
Risk Governance: the challenge of risk transfer instruments and catastrophic ...
 
SCENARIO DAMAGE ANALYSIS OF RC PRECAST INDUSTRIAL STRUCTURES IN TUSCANY, ITALY
SCENARIO DAMAGE ANALYSIS OF RC PRECAST INDUSTRIAL STRUCTURES IN TUSCANY, ITALYSCENARIO DAMAGE ANALYSIS OF RC PRECAST INDUSTRIAL STRUCTURES IN TUSCANY, ITALY
SCENARIO DAMAGE ANALYSIS OF RC PRECAST INDUSTRIAL STRUCTURES IN TUSCANY, ITALY
 
CH&Cie_GRA_Stress-testing offer
CH&Cie_GRA_Stress-testing offerCH&Cie_GRA_Stress-testing offer
CH&Cie_GRA_Stress-testing offer
 
Scenario Testing
Scenario TestingScenario Testing
Scenario Testing
 
Icef miami 2014 risk reward
Icef miami 2014 risk rewardIcef miami 2014 risk reward
Icef miami 2014 risk reward
 
aaoczc2252
aaoczc2252aaoczc2252
aaoczc2252
 
Project risk management
Project risk managementProject risk management
Project risk management
 
Philippe Cotelle’s presentation on SPICE at AIRBUS, FERMA Forum 2015
Philippe Cotelle’s presentation on SPICE at AIRBUS, FERMA Forum 2015Philippe Cotelle’s presentation on SPICE at AIRBUS, FERMA Forum 2015
Philippe Cotelle’s presentation on SPICE at AIRBUS, FERMA Forum 2015
 
Operational Risk Loss Forecasting Model for Stress Testing
Operational Risk Loss Forecasting Model for Stress TestingOperational Risk Loss Forecasting Model for Stress Testing
Operational Risk Loss Forecasting Model for Stress Testing
 
PECB Webinar: Risk Treatment according to ISO 27005
PECB Webinar: Risk Treatment according to ISO 27005PECB Webinar: Risk Treatment according to ISO 27005
PECB Webinar: Risk Treatment according to ISO 27005
 
14 analysis techniques
14 analysis techniques14 analysis techniques
14 analysis techniques
 
Fire risk analysis of structures and infrastructures: theory and application ...
Fire risk analysis of structures and infrastructures: theory and application ...Fire risk analysis of structures and infrastructures: theory and application ...
Fire risk analysis of structures and infrastructures: theory and application ...
 
Operational Risk &amp; Basel Ii
Operational Risk &amp; Basel IiOperational Risk &amp; Basel Ii
Operational Risk &amp; Basel Ii
 
Scenario analysis
Scenario analysisScenario analysis
Scenario analysis
 
Project Risk Management (10)
 Project Risk Management (10) Project Risk Management (10)
Project Risk Management (10)
 
Sensitivity, specificity and likelihood ratios
Sensitivity, specificity and likelihood ratiosSensitivity, specificity and likelihood ratios
Sensitivity, specificity and likelihood ratios
 
scenario analysis
scenario analysisscenario analysis
scenario analysis
 

Similaire à Scenario Models and Sensitivity Analysis in Operational Risk

Dissertation Final
Dissertation FinalDissertation Final
Dissertation Final
Gavin Pearce
 
Emergency Planning Independent Study 235.b
Emergency Planning  Independent Study 235.b  Emergency Planning  Independent Study 235.b
Emergency Planning Independent Study 235.b
MerrileeDelvalle969
 
How does Project Risk Management Influence a Successful IPO Project.doc
How does Project Risk Management Influence a Successful IPO Project.docHow does Project Risk Management Influence a Successful IPO Project.doc
How does Project Risk Management Influence a Successful IPO Project.doc
Dịch vụ viết thuê đề tài trọn gói 👉👉 Liên hệ ZALO/TELE: 0917.193.864 ❤❤
 
Dissertation - Submission version
Dissertation - Submission versionDissertation - Submission version
Dissertation - Submission version
tmelob_souto
 

Similaire à Scenario Models and Sensitivity Analysis in Operational Risk (20)

Web2.0 And Business Schools Dawn Henderson
Web2.0 And Business Schools   Dawn HendersonWeb2.0 And Business Schools   Dawn Henderson
Web2.0 And Business Schools Dawn Henderson
 
Research handbook
Research handbookResearch handbook
Research handbook
 
Engineers and Managers, A Multi-perspective Analysis of Conflict
Engineers and Managers, A Multi-perspective Analysis of ConflictEngineers and Managers, A Multi-perspective Analysis of Conflict
Engineers and Managers, A Multi-perspective Analysis of Conflict
 
Analysis tekla
Analysis teklaAnalysis tekla
Analysis tekla
 
Research by pk scholar
Research by pk scholarResearch by pk scholar
Research by pk scholar
 
Masters Dissertation
Masters DissertationMasters Dissertation
Masters Dissertation
 
Industry project developling full it software solutions and project management
Industry project developling full it software solutions and project managementIndustry project developling full it software solutions and project management
Industry project developling full it software solutions and project management
 
The Business Model Design of Social Enterprise
The Business Model Design of Social EnterpriseThe Business Model Design of Social Enterprise
The Business Model Design of Social Enterprise
 
Dissertation Final
Dissertation FinalDissertation Final
Dissertation Final
 
Acupuncturists As Entrepreneurs Experiences Of New Professionals Founding Pr...
Acupuncturists As Entrepreneurs  Experiences Of New Professionals Founding Pr...Acupuncturists As Entrepreneurs  Experiences Of New Professionals Founding Pr...
Acupuncturists As Entrepreneurs Experiences Of New Professionals Founding Pr...
 
THE IMPACT OF SOCIALMEDIA ON ENTREPRENEURIAL NETWORKS
THE IMPACT OF SOCIALMEDIA ON ENTREPRENEURIAL NETWORKSTHE IMPACT OF SOCIALMEDIA ON ENTREPRENEURIAL NETWORKS
THE IMPACT OF SOCIALMEDIA ON ENTREPRENEURIAL NETWORKS
 
NYU Masters Thesis - 2009 (Thesis of the Year - Runner Up)
NYU Masters Thesis - 2009 (Thesis of the Year - Runner Up)NYU Masters Thesis - 2009 (Thesis of the Year - Runner Up)
NYU Masters Thesis - 2009 (Thesis of the Year - Runner Up)
 
Emergency Planning Independent Study 235.b
Emergency Planning  Independent Study 235.b  Emergency Planning  Independent Study 235.b
Emergency Planning Independent Study 235.b
 
Emergency planning independent study 235.b
Emergency planning  independent study 235.b  Emergency planning  independent study 235.b
Emergency planning independent study 235.b
 
Analysis of australian organizations based on the nine dimensions approach
Analysis of australian organizations based on the nine dimensions approachAnalysis of australian organizations based on the nine dimensions approach
Analysis of australian organizations based on the nine dimensions approach
 
How does Project Risk Management Influence a Successful IPO Project.doc
How does Project Risk Management Influence a Successful IPO Project.docHow does Project Risk Management Influence a Successful IPO Project.doc
How does Project Risk Management Influence a Successful IPO Project.doc
 
Dissertation - Submission version
Dissertation - Submission versionDissertation - Submission version
Dissertation - Submission version
 
Sales and operations planning a research synthesis
Sales and operations planning  a research synthesisSales and operations planning  a research synthesis
Sales and operations planning a research synthesis
 
MBA Dissertation Thesis
MBA Dissertation ThesisMBA Dissertation Thesis
MBA Dissertation Thesis
 
Green Computing Research: Project management report
Green Computing Research: Project management reportGreen Computing Research: Project management report
Green Computing Research: Project management report
 

Scenario Models and Sensitivity Analysis in Operational Risk

  • 2. 2 Content 1.Introduction ........................................................................................................... 3 1.1 Research Objective ......................................................................................... 3 1.2 Literature Review ........................................................................................... 4 1.3 Research Procedure ....................................................................................... 4 2.Scenarios Generation ............................................................................................. 5 2.1 Scenario I – Asset Misappropriation ............................................................... 5 2.2 Scenario II – Data loss by Cyber Attack .......................................................... 9 2.3 Aggregated Scenario .................................................................................... 11 3.Sensitivity Analysis ............................................................................................... 14 3.1 Sensitivity analysis for Scenario I .................................................................. 14 3.2 Sensitivity analysis for Scenario II ................................................................. 21 3.3 Sensitivity Analysis for Aggregated Scenario ................................................ 25 4.Alternative Adjustment on Loss Measure Quantile ............................................. 26 4.1 Introduction to Cluster Analysis ................................................................... 26 4.2 Application on Adjustment of Scenario Result ............................................. 27 4.3 Important Meaning to Loss Measure Quantile ............................................ 27 5.Conclusion ............................................................................................................ 28 5.1 Discussion of strategic options ..................................................................... 28 5.2 Limitation and Improvement ........................................................................ 29 6.Reference ............................................................................................................. 29 7.Appendix .............................................................................................................. 30
  • 3. 3 1.Introduction The purpose of this paper is to create, analyse and generate reliable scenario data for operational risk(OR) events in a bank and to provide efficient strategies regarding the improvement of operational risk management in order to assist in the prevention of future risks. Since the scarce of the essential data in these events with ‘high severity and low frequency’ when aggregating bank’s losses, scenario approach is most appropriate method to be able to fill the gaps of our total losses distribution, especially in the tail. Effective scenario modelling could help the financial institutions to understand how a particular operational risk event happened, what cause it, and what’s the possible impacts of it. Scenario sensitivity analysis could also help the decision maker to find the key factors when the loss occurs and inspire them to generate most efficient controls to prevent their institutions from future losses. At this paper, we focus on modelling and sensitivity testing of two cases including asset misappropriation and cyber-attack since these two events donate huge contributions in loss distributions in a bank. Both of them have characteristics like high severity low frequency, which are obviously main targets of scenario analysis. Moreover, sensitivity analysis for these two scenarios and combined scenario also be used as the method to explore most sensitive and essential risk drivers. Next, cluster method is applied to adjust quantiles by grouping data into subsets of data regarding the severity of OR losses. Based on the result we have obtained; strategic options can be provided to managers in the future operational risk management as for these two OR events. 1.1 Research Objective As far as we know, there is still no standard method for scenario generation and aggregation since the existence of differences in various OR events and business environment. Hence, it’s meaningful to explore the more efficient process and methodology at this section aiming to support decision makers by showing the sensitive factors at scenario cases and estimating the sufficient and appropriate capital requirement for preventing the bank from future risks. Here, this research is to apply academic concepts and methodologies of operational risk management and assessment especially scenario approach into the realistic case in a bank. The result of this research can be directly used in banks as the models to analyse their operational losses from asset misappropriation and cyber-attack. Based on scenario approach and cluster method, the appropriate capital requirement can be calculated as operational losses in the following years. Of course, some additional conditions should be considered every year regarding the changes of external financial environment and internal business structure. We do believe that this research is applicable in current global financial circumstance and it could contribute on robustness of scenario modelling through solid considerations of details in this event and target organisation construction.
  • 4. 4 1.2 Literature Review Academics and practitioners have proposed various multiple-scenario analyses to treat uncertainties in the future of business organizations since the 1970s [14] . Since the external local and global environment are laden with uncertain changes, it is difficult to detect potential trends. Hence scenario analysis is worth by advocating the generations of alternative pictures of the external environment’s future[2] . There is no doubt that scenario analysis has increasing attractiveness to managers [3][4] . Generating scenarios has various methodologies which can be found in literature [4-10] . For instance, Ringland[10] illustrates that majority of companies she has surveyed apply approach named as Pierre Wack Intuitive Logics, which created by former Shell group planner Pierre Wack. This approach focuses on constructing a comprehensible and credible set of situations of the forthcoming to test business plans or projects as a ‘wind tunnel’ by the encouragement of public debate or improvement of coherence. During the past few decades, the thinking that Shell used to deal with scenarios has spread out to other organizations and institutions such as SRI and GBN [10] . Later, this Shell approach and Godet’s approach are compared by Barbieri Masini and Medina Vasquez [13] . Ringland[10] also introduces other organizations and their methods constructing scenarios including ‘Battelle Institute (BASICS), the Copenhagen Institute for Future Studies (the futures game), the European Commission (the Shaping Factors–Shaping Actors), the French School (Godet approach: MICMAC), the Futures Group (the Fundamental Planning Method), Global Business Network (scenario development by using Peter Schwartz’s methodology), Northeast Consulting Resources (the Future Mapping Method) and Stanford Research Institute (Scenario-Based Strategy Development)’. In this paper, scenario process is adjusted based on bank structure, target events, and all above the previous scenario approaches experiences. 1.3 Research Procedure The research process is based on the basic scenario process as following steps[2][11][12] : Step 1: Identify focal issues for our bank Step 2: Main forces in the local circumstance and internal and external business environment Step 3: Driving key risk drivers and forces Step 4: Ranking factors by uncertainty and importance Step 5: Drawing scenarios flowchart in reasonable and logical way Step 6: Materializing the scenarios and aggregating scenarios Step 7: Sensitivity analysis Step 8: Cluster method to generate Step 9: Implications for strategy Step 10: Discuss the strategic options Step 11: Settle the implementation plan
  • 5. 5 The objective is to observe and analyse sensitivities of scenario cases based on suitable assumptions summarized from empirical evidence. The Swiss Cheese Model can be used to build scenario modelling after finding each events’ exposures, occurrences, and impacts. Through Monto Carlo method, the loss distributions can be generated during a year, and combined scenario loss distribution can be obtained through aggregation technique as the benchmarking of capital requirement. In this paper, two individual scenarios and one combined scenario distributions are generated for OR events asset misappropriation and data loss from cyber-attack. After inputting the necessary parameters based on bank’s information and experts’ opinions, Monte Carlos simulation is used to generate the VaR in each scenario. Next, VaR quantiles can be correct by cluster methodology to produce more suitable VaR quantiles based on the severity of OR losses. Decision makers can cite this research result as reliable and essential suggestions for operational risk management for their bank. 2.Scenarios Generation 2.1 Scenario I – Asset Misappropriation 2.1.1 Asset Misappropriation definition Asset misappropriation fraud is the asset lost if people who are entrusted to manage the assets of organization steal from it. This fraud behavior usually happens due to third parties or employees in an organization abuse their position to obtain access for stealing cash, cash equivalents, company data or intellectual property, which are vital for business running for an organization. Hence, this type operational risk should be modelled and analysed appropriately, especially under the case that extremely scarce of real data due to privacy of this issue and stigma of organization and negative impact of public image. This type of internal fraud can attribute to company directors, or its employees, or anyone else entrusted to hold and manage the assets and interests of an organization. Modelling, analysing, and discovering the most efficient scenario methodology is the main purpose of this paper in order to obtain a deeper understanding of this kind of fraud and provide realistic solving methods to avoid, stop and remedy this kind of issues. 2.1.2 Scenario Explanation and Assumptions Normally, asset misappropriation fraud can be the fraudulent behavior including: i. Embezzlement where accounts have been falsified or fake invoices have been made. ii. Deception by employees inside bank, false expense statements iii. Payment frauds where payrolls have been fictive or diverted, or inexistent clients or employees have been created. iv. Data theft v. Intellectual property stealing
  • 6. 6 In this scenario, the target object is the asset misappropriation within a medium size bank branch. Based on bank’s basic information and structure, some reasonable assumptions can be proposed at this stage as follows. • The most possible assets types in this bank can be stolen cover credit notes, vouchers, company data and intellectual property. • Bank has 2000 employees, and we could simplifier all staff into 5 different types positions including head of a bank and vice-presidents (20) with 10%, managers and directors (180) with 10%, senior analyst (600) with 5%, junior analyst (1200) with 5% according to value of access they hold in a bank. • Generally, the average probability of internal fraud happens inside bank which is 5%. Based on the level of processes and internal systems and controls, this probability can move on or down. It is slightly different for criminal probability in different levels such as the head of a bank and vice-presidents with 10% criminal probability, managers and directors with 10%, senior analyst with 5%, junior analyst with 5% according to value of access they hold in a bank. • The amount of asset can be stolen are different with various positions and it can be measured as a random process which follows normal distributions with different mean and (variance). For instance, head of a bank and vice-presidents steal around 1000-unit asset with variance (300), managers and directors may access about 100-unit with variance (30), senior associates can control nearly 20-unit with variance (6), and junior analyst only could obtain near 10-unit items with variance (3). • If employees what to misappropriate bank’s asset under their authority, they could directly access certain volume such as head of a bank and vice-presidents (level 4) could access 100% amount of asset, managers and directors (level 3) can control 90%, senior analyst (level 2) could approach 75%, and junior analyst (level 1) can access 50% according to number of entrances they hold in a bank. • if an employee wants to embezzle bank assets, this employee needs permission from his or her superiors to complete this fraudulent behaviour. According to experts within this bank, the possibilities that superiors are cheated successfully through fake documents with probability 50% that junior analyst obtains permit from their managers, similarly with probability 25% managers and directors could fraud successfully, and with probability 10% that head and vice-presidents steal assets from bank. • Regarding to the level of employees, the severity of this issue can be measured with a bank and vice-presidents ×1,728, managers and directors ×1.44, senior analyst ×1.2, and junior analyst ×1. Once this happens, banks should adapt immediate reactions and report it into action fraud. Since if fraudsters are not tackled, these opportunistic one-off frauds can become systemic and spread out within bank and fraudsters may think their behaviors are acceptable, which forms a negative company culture of theft and fraud. 2.1.3 Asset Misappropriation Flowchart
  • 7. 7 In this scenario, the most possible missed at our bank under asset misappropriation can be divide into four types such as credit notes, vouchers, bank data and intellectual property. All asset misappropriation can attribute to two isolated cases involving expense fiddling or an employee lying about his or her qualifications to get a job. In this case, different types of employees’ positions are considered as different occurrences which are easy to calculate the total loss based on their level of access and value of assets they could obtain. At the end, the impact can be used to calculate the total loss as the following formula. Here, we measure reputation loss based on severity of this event. 𝑳𝒐𝒔𝒔 = 𝑽𝒍𝒐𝒔𝒔 ∗ 𝑽 𝒂𝒎𝒐𝒖𝒏𝒕 ∗ 𝑺𝒆𝒗𝒆𝒓𝒊𝒕𝒚 After analysing exposure, occurrence and impact of asset misappropriation, we could use the Swiss Cheese Model (Cumulative Act Effect) to apply preventative (P), detective (D), and corrective (C) controls to reduce the possibility of this issue happens, control the effect of this event, and mitigate the consequences of this event. Here, different controls can be initialized as the quantitative values according to the expert’s suggestions and historical data as following: • P1: Vet employees by CV and references could reduce initial criminal probability • P2 - Implement a whistleblowing policy • P3 - Impose clear segregation of duties • P4 - Control access to buildings and systems • D1 - Checking invoices and related documents • D2: Internal audit could detect this event with probability 98%. • C1: The insurance proportions are different for various level of employees such as a bank and vice-presidents 0%, managers and directors 70%, senior analyst 50%, and junior analyst 0%. • C2: Tackle relevant employees could reduce the severity of this issue Expusure Credit Notes Vouchers Bank Data Intellectual property Occurrence Head and Vice- presidents Managers and Directors Senior Associate Junior Analyst Impact Value of loss Amount of loss Reputation loss
  • 8. 8 2.1.4 Result Let’s apply Monte Carlo to simulate this scenario in order to obtain reliable data to analyse this event. For making sure the accuracy of the result, this process is repeated for 10000 times, which shows more reasonable and realistic results compared with 2000 times and 5000 times. Inputting all the parametrises and using the above arithmetic to get the following result of VaR ($): Plot 1: Simulation Result of Scenario I – Asset Misappropriation By trying to apply different distribution types to fit our data, we find that Generalized Extreme Value fits data very well, and it makes senses since asset misappropriation can be treated as the extreme events. By Extreme Value Theorem (EVT), Generalized Extreme Value (GEV) distribution P1: Vet employees by CV and references P2: Implement a whistleblowing policy P3: Control access to buildings and systems P4: Impose clear segregation of duties Scenario: Asset Misappropriation D1: Checking invoices and related documents D2: Internal Audit C1: Insurance and backup C2: Tackle relevant employees 25% VaR 50% VaR 75% VaR 95% VaR 99% VaR 99.9% VaR 13783.10 22268.45 41949.64 118382.76 210907.22 302527.28
  • 9. 9 is a normal way to measure tail loss, especially for scenario case. From the simulation result, we can find that the overall VaR distribution is roughly a lognormal distribution, which might fit reality. We can treat it as an acceptable result. Estimate values for GEV distribution’s parameters, mean, and variance as follows: Log likelihood Mean Variance k sigma mu -112685 44682.1 Inf 0.657664 11172.5 17407.7 From above figure, one important characteristic of asset misappropriation is that once it happens and will course large loss for a bank. Although the trust between bank and employees is essential, some strategies ought to be adapted to stop this kind of issues at the very beginning to make sure it won’t make a huge impact for bank. Generalized Extreme Value Fitting is the most appropriate fitting method in this case. Obviously, this figures can be treated as Lognormal distribution, which makes sense in real life. 2.2 Scenario II – Data loss by Cyber Attack 2.2.1 Significance of exploring data loss by cyber attack Cyber-attacks are advanced persistent menaces, which target company secrets in order to can cost companies a huge amount of money loss and could even put them out of business. Therefore, it’s valuable to model and analyse the loss caused by cyber-attacks. Normally, hackers infiltrate an institution’s system out of one of two aims: cyber espionage or data sabotage. In this scenario, data sabotage is highlighted especially data loss caused by hacker’s infiltrate at bank. The emphasis of this scenario is to simulate how hackers insinuate into bank’s network system and destroy essential data, and what detections a bank could apply to protect their data and minimize losses. 2.2.2 scenario analysis flowchart Assumptions: • The total volume of data at this bank is 10000 units • There are three firewalls at this bank with different security levels, data allocations, and data significance. • There are only two types of data including client’s information (50%) and management information (50%). Usually, bank has backup for all clients’ information, but sometimes they may forget to record some clients’ information because of omitting of fulfill in backup storage or negligence of related staff. Majority of management information may not be copied at backup. • Network engineers check the whole system once an hour, however, frequency of checking can be recognized as the ability of engineers, which means that more frequent of checking more strong capability of an engineer is. At here, it can be supposed that hackers almost surely can be found if they infiltrate at the same time that engineers check system.
  • 10. 10 2.2.3 Scenario process Based on assumptions of this scenario, Monte Carlo technique is applied to simulate cyber-attacks during a year and generate data in order to compute VaR (Value at Risk) and find the distribution of loss. For making sure the accuracy of this model, Monte Carlo was repeated 10000 times. Let’s start with a hacker tries to infiltrate bank’s system and hacker needs to pass three firewalls with different security levels, data value, and data distributions as follows. a. Hackers need to spend 5 minutes to infiltrate the first firewall and obtain 5% data valued 10 dollars per units, however, each hackers could pass first firewall with probability 50%. b. Hackers need to spend 15 minutes to infiltrate the second firewall and obtain 10% data valued 20 dollars per units, and each hacker could pass the second firewall with probability 25%. c. Hackers need to spend 45 minutes to infiltrate the third firewall and obtain 85% data valued 50 dollars per units, however each hacker could pass first firewall with probability 5%. After passing three firewalls, a hacker could obtain 5% data per minute for downloading it or destroying it. Once engineers check the system, hacker stops destroying data immediately. However, the data has been destroyed which can’t recover immediately, which will cause direct loss of bank. Hence, the loss can be calculated by timing time to detect (Time), data value (Vadata), and data volume (Voldata). 𝑳𝒐𝒔𝒔 = 𝑻𝒊𝒎𝒆 × 𝑽𝒂 𝒅𝒂𝒕𝒂× 𝑽𝒐𝒍 𝒅𝒂𝒕𝒂 2.2.4 Result Data loss under Cyber-attacks Exposure Client’s Information Management information Impact PC.1 Firewall 1: 50% pass, 5% data vol Scenario: Cyber-attacks D.C.1 Engineers Value of data Volume of data Time to detect PC.2 Firewall 2: 25% pass,10% data vol PC.3 Firewall 3: 5% pass, 85% data D.C.2 Backup
  • 11. 11 By running Monte Carlo method through MatLab, VaR values are computed for different quantiles, which is meaningful to provide scenario data in order to combine it with internal loss data, external loss data for different business lines at bank. Then broad operational loss at bank can be calculated. Plot 2: Simulation Result of Scenario II – data loss by cyber attack After trying Lognormal, Generalized Lognormal, and Generalized Extreme Value (GEV) distributions to fit our data, GEV performs well in this cyber-attack scenario. The following result shows the fitting of GEV distribution for our scenario. From the simulation result, we can find that the overall VaR distribution is roughly a lognormal distribution, which might fit reality. We can treat it as an acceptable result. Followings are the value for parameters for fitting GEV distributions: Log likelihood Mean Variance k sigma mu -103520 32427.5 6.81508e+07 -0.0122104 6538.51 28731.5 2.3 Aggregated Scenario 2.3.1 Meaning of Combination of Two Scenarios Applying our scenario data with an aim at incorporation into capital, aggregating losses of these different scenarios is the key part for obtaining bank’s total operational losses. In general, all 80 (10 event types X 8 business lines) operational risk categories would be measured. The first step is to consider different combinations of various scenarios by using dependency graph or scenario correlation matrix. At this paper, the aggregation of these two scenarios is considered by using var-cov matrix method since asset misappropriation and cyber-attack are the key operational risk 25% VaR 50% VaR 75% VaR 95% VaR 99% VaR 99.9% VaR 26932.00 31143.42 36216.00 48334.67 59349.45 76068.35
  • 12. 12 events. The objective is to explore the relationship between total loss distribution and two individual loss distribution through applying scenario aggregation methodology. By focusing on key risk exposures and assessing the dependencies between scenarios, the regulatory capital of both events can be calculated to meet requirement of preventing our bank from operational risk losses. 2.3.2 Dependency analysis The interaction part of these two scenarios is the same object bank data. Considering bank data lost by cyber-attack, this may be caused by the both external and internal fraudsters. For instance, some internal employees may sell internal access of essential data to external fraudsters to steal company assets. As for specifically interacted terms, two pairs are found as highly including dependent potential Criminal in Scenario 1 with checking frequency in scenario 2, and insurance and backup in scenario 1 with backup in scenario 2. As for other elements in both scenarios, they can be dealt as identically independent, since the correlations between them can be ignored out of low dependent or independent relationships. For our aggregated scenario, the connection of the individual scenario is the correlated parameters. From the previous parameters discussed above, it shows that the correlated parameter is following. Scenario 1 Scenario 2 Correlation A Probability of Potential “Criminal” in P1 Checking Frequency High B Insurance and backup proportion in C1 Backup Proportion Median For pair A, the probability of potential criminal reflects the overall quality level of the employees, while checking frequency reflects the technology level of the engineer. Both of these reflect the quality of institution’s employee. For pair B, the proportion of insurance and backup in scenario 1 include the backup of data. Data also could be important asset which needs to be protected. So the backup of data is included in both scenarios. Once the data in scenario 2 recover, part of C1 also should be recovered (or insured). 2.3.3 Aggregation Method From above analysis, two scenarios can be dealt with correlation matrix since they have some main factors which are correlated with each other. However, considering the several parameters used in two scenarios, only a few of them are correlated. The correlated relationship is not that obvious. Here the correlated parameter of two scenarios can be simply settled as 0.3. By var-cov matrix method, the following formula is used to calculate the aggregated loss. 𝑋L ∙ Σ ∙ 𝑋 Where 𝑋 is the vector of the loss, Σ is the correlated matrix. Then, we adjust this for two- scenarios situation. The formula is in the form of following.
  • 13. 13 𝐿PQPRS = 𝑆U 𝑆V 𝜌UU 𝜌UV 𝜌VU 𝜌VV 𝑆U 𝑆V U V This formula is given in the ‘’Milliman Research Report: Aggregation of Risks and Allocation of Capital”.[15] Where 𝑆U and 𝑆V are the loss from Scenario 1 and Scenario 2 respectively, and 𝜌UV = 𝜌VU = 0.3 resulting from experts’ opinions or historical loss distributions. 𝜌UU = 𝜌VV = 1 which is because every random variable is completely correlated to itself. 2.3.4 Results Applying Monte Carlo methodology for above-aggregated scenario, VaR can be generated after running 10000 times M-C methods. The algorithm is similar to scenario 1; similarly, GEV fits our data well in this section since it’s still the combination of extreme event losses. Plot 3: Simulation Result of Combined Scenarios Also, GEV performs well in this scenario. Parameters, mean, and variance for GEV distribution are estimated as follows: Log likelihood Mean Variance k sigma mu -114376 57520.1 4.59793e+09 0.423088 14972.3 38246.2 Our finding is the following. Comparing three histogram plot, to get the distribution of aggregated scenario, the distribution of scenario 1 shift to right a little by being affected by the distribution of scenario 2. 25% VaR 50% VaR 75% VaR 95% VaR 99% VaR 99.9% VaR 33734.71 43380.94 63110.38 140655.30 235615.27 333344.57
  • 14. 14 3.Sensitivity Analysis Some change on the necessary control and different parametric can be changed to observe the impact on VaR. Then the importance of these control methods and parametric can be prioritised depending on assorted VaR, which might help the manager to have a good control on the risk of relative scenarios. In order to have a good version to the real situation of loss, here we recalculate 25%VaR, 50%VaR, 75%VaR, 95%VaR, 99%VaR and 99.9%VaR to compare and mainly focus on 50%VaR and 99.9%VaR This could help decision makers to understand the expected and unexpected loss level. In each table, the gray line would be the original values setting. 3.1 Sensitivity analysis for Scenario I 3.1.1 P1 - Vet employees by CV and references The “Vet employees by CV and references” is a control method during the recruitment process and employee training. Here we set a probability to represent the probability of every employee might want to have such “criminal” behavior. Combined with the overall staff number, the number of potential “criminal” are binomial distribution. Through strict recruitment and career training, the possibility of potential ‘theft’ could decrease. Here we adjust this value and get the following table. Probability of Potential “Criminal” VaR Analyst Associate Directors Vice- presidents 25%VaR 50%VaR 75%VaR 95%VaR 99%VaR 99.9%VaR 0.05 0.05 0.025 0.025 5331.20 9679.31 22463.20 74212.06 187905.30 251974.99 0.1 0.1 0.05 0.05 13783.10 22268.45 41949.64 118382.76 210907.22 302527.28 0.2 0.2 0.1 0.1 31358.74 47146.27 75410.93 182585.70 268775.28 432655.92 0.3 0.3 0.15 0.15 49975.73 73195.94 108119.85 227406.16 317244.38 432344.15 0.1 0.1 0.05 0.05 13783.10 22268.45 41949.64 118382.76 210907.22 302527.28 0.05 0.1 0.05 0.05 11915.89 20135.48 40328.27 124702.50 205327.38 267028.34 0.1 0.05 0.05 0.05 12742.31 21427.96 41775.47 117471.95 216375.04 288731.09 0.1 0.1 0.025 0.05 11695.91 19724.78 40345.43 122966.52 204077.74 347431.20 0.1 0.1 0.05 0.025 10769.72 15193.14 26137.63 72223.46 185688.72 272214.32 From the first set of the table, it can be detected that higher probability of potential “criminal” should lead to more loss. For the second set of the table, following plot can illustrate the changes.
  • 15. 15 If only one level is strictly controlled, the loss decreases in the different degree. Both on expected loss and extreme loss point of view, the conclusion is obvious. Strictly control the “Head and Vice- presidents” level from asset misappropriation is the most efficient way to control the loss. 3.1.2 P2 - Implement a whistleblowing policy In “Implement a whistleblowing policy” control, it can be assumed that if there is a whistleblowing policy, the whistleblowing could only happen when the employee has access to the relative asset. This should make sense because only other employee who have the same access level can disclose the “criminal”. To make the model clear, setting the possibility of being disclosed by the same level employee is 0.5. Once being disclosed, the loss should be 0. Then the loss can be compared between with and without this control. Disclosed probability 25%VaR 50%VaR 75%VaR 95%VaR 99%VaR 99.9%VaR No Control 19619.21 29618.54 53995.88 178867.91 246059.11 371722.37 0.25 16779.74 25913.66 47422.27 157204.22 231346.46 338667.38 0.5 13783.10 22268.45 41949.64 118382.76 210907.22 302527.28 0.75 10991.09 18582.81 36765.00 82775.73 179818.56 256146.46 From the table, it is obvious that the correlation between disclosed probability and loss is negative. This also makes sense in management, which is whistleblowing more, loss lower. 3.1.3 P3 - Impose clear segregation of duties In corporation management, segregation of duties is always necessary. Considering security factor, the employee in the certain department should have no access to the asset which have no relation to his duty. In this model, if this “Impose clear segregation of duties” exist, every employee only has access to 80% of all the asset at his access level. However, the top level is not affected by this control condition. No Level Control Control Junior Analyst Control Senior Associate Control Managers & Directors Control Head & Vice- presidents 99.9%VaR 302527.28 267028.34 288731.09 347431.20 272214.32 50%VaR 22268.45 20135.48 21427.96 19724.78 15193.14 0.00 80000.00 160000.00 240000.00 320000.00 400000.00 10000.00 13000.00 16000.00 19000.00 22000.00 25000.00
  • 16. 16 Trans-department Asset 25%VaR 50%VaR 75%VaR 95%VaR 99%VaR 99.9%VaR 0.4 12908.76 21278.66 41080.52 117088.34 210221.36 301523.66 0.6 13351.52 21782.61 41466.08 117592.98 210578.02 302022.50 0.8 13783.10 22268.45 41949.64 118382.76 210907.22 302527.28 No Control 14222.78 22704.57 42397.68 118984.80 211220.30 302998.81 From the plot, having control on trans-department access is not an effective way for prevent huge loss. And it has some effects on controlling the expected loss. 3.1.4 P4 - Control access to buildings and systems Controlling access is a common way both for corporation management and security in modern business management. In this model, all employees can be separated into 4 level. The higher level staff have more access and the value of the asset he accesses to is higher. High-level staff’s access covers low-level staff’s. However, if the potential “criminal” staff target on the higher level assets which he has no access to. For example, to do this, the staff need to get the permit or signature from higher level. There is certain possibility to get higher access. Considering the universality of this control, here it is treated as a necessary way for protecting asset and will not assume this control disappear. However, the possibilities of getting higher access are adjusted to see the VaR changing. Lower Access Probability VaR 1->2 2->3 3->4 25%VaR 50%VaR 75%VaR 95%VaR 99%VaR 99.9%VaR 0.5 0.25 0.1 13783.10 22268.45 41949.64 118382.76 210907.22 302527.28 0.25 0.25 0.1 12751.61 21241.60 40958.12 117211.00 209653.17 301148.42 0.5 0.125 0.1 13332.93 21823.25 41435.27 117963.40 210747.79 302040.43 0.5 0.25 0.05 13672.26 22050.67 41792.86 118314.18 210907.22 302527.28 0.4 0.6 0.8 No Control 99.9%VaR 301523.66 302022.50 302527.28 302998.81 50%VaR 21278.66 21782.61 22268.45 22704.57 300500.00 301000.00 301500.00 302000.00 302500.00 303000.00 303500.00 20500.00 21000.00 21500.00 22000.00 22500.00 23000.00
3.1.5 D1 - Checking invoices and related documents
Once an asset misappropriation has taken place, checking invoices and related documents can still limit the loss: a daily or even ad hoc review may reveal the unusual situation, and the relevant account can then be locked. The assumption is that every cross-level misappropriation may be checked; the parameter in the table is the probability that such a misappropriation nevertheless escapes the check (equivalently, the fraction of the cross-level loss that is retained), set to 0.5 in the base case. If this control is weakened or removed, the VaR increases as follows.

Escape probability    25%VaR      50%VaR      75%VaR      95%VaR       99%VaR       99.9%VaR
0.25                  9323.02     14158.38    23769.44    100787.35    199441.42    293501.83
0.5                   13783.10    22268.45    41949.64    118382.76    210907.22    302527.28
0.75                  18213.83    30380.83    60060.16    144409.18    225844.52    323583.66
No control            22558.54    38510.17    78060.97    170365.08    250599.46    343463.46

The higher the escape probability, the higher the loss; in other words, stricter supervision means a lower loss. A lighter version of the control can also be considered, in which only cross-level misappropriations in one direction are checked, either from a higher level towards a lower one or the other way round. The two variants are compared below.

Check direction    25%VaR      50%VaR      75%VaR      95%VaR       99%VaR       99.9%VaR
Both               19619.21    29618.54    53995.88    178867.91    246059.11    371722.37
Low->High          26331.34    45449.64    92723.68    201669.75    281214.65    397574.95
High->Low          22251.52    32529.17    56893.23    181279.66    248473.46    374311.24

Checking invoices only in the high-to-low direction produces a loss close to that of checking both directions; in other words, checking from high to low is the more effective direction, while checking from low to high matters much less. This is probably because most of the loss arises when high-level staff misappropriate lower-level assets.
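A minimal sketch of this control, under the same assumptions as the Appendix code; the levels, the loss amount and the 'direction' switch are illustrative additions used only to show the idea of direction-restricted checking.

pEscape    = 0.5;            % base-case probability that a cross-level theft escapes the check
empLevel   = 3;              % level of the fraudster
assetLevel = 1;              % level of the misappropriated asset
direction  = 'high->low';    % which cross-level thefts are checked: 'both', 'high->low' or 'low->high'
lossEvent  = 800;            % illustrative theft amount
checked = (empLevel ~= assetLevel && strcmp(direction,'both')) || ...
          (empLevel >  assetLevel && strcmp(direction,'high->low')) || ...
          (empLevel <  assetLevel && strcmp(direction,'low->high'));
if checked
    lossEvent = lossEvent * pEscape;   % expected loss retained after the invoice check
end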
3.1.6 D2 - Internal Audit
Unlike the previous controls, an internal audit only takes place at fixed points in time, so it cannot prevent every loss; it can, however, recover or reduce part of the total. In the base case the audit is assumed to recover 2% of the loss, i.e. a retained-loss factor of 0.98. The table shows the effect of stricter audits (lower retained-loss factors), together with the case where the control is absent.

Retained-loss factor    25%VaR      50%VaR      75%VaR      95%VaR       99%VaR       99.9%VaR
No control              14064.39    22722.91    42805.75    120798.74    215211.45    308701.30
0.98                    13783.10    22268.45    41949.64    118382.76    210907.22    302527.28
0.9                     12657.95    20450.62    38525.18    108718.86    193690.30    277831.17
0.8                     11251.51    18178.33    34244.60    96638.99     172169.16    246961.04
0.7                     9845.07     15906.04    29964.03    84559.12     150648.01    216090.91

This is again a straightforward parameter: the stricter the internal audit, the lower the loss.
3.1.7 C1 - Insurance and backup
Once a misappropriation loss has occurred, insurance is a good way to limit it, and some assets, such as important data, can be recovered from a backup. In the base case only the assets in the second and third levels are insured, at proportions of 70% and 50% respectively: the bottom-level assets have a low value and are not cost-efficient to insure, while the top-level assets are accessible only to top-level staff and already enjoy a high level of security, so they carry no insurance either. The allocation of insurance across the levels is then altered to look for a better way of reducing the VaR.

Insurance proportion (Level1 / Level2 / Level3 / Level4)       25%VaR      50%VaR      75%VaR      95%VaR       99%VaR       99.9%VaR
No control                                                     23247.35    33175.01    52407.24    127692.78    221607.36    315079.68
Insure high levels   (0 / 0 / 0.7 / 0.5)                       15482.45    19981.17    29676.13    67843.51     114056.54    160163.33
Insure median levels (0 / 0.7 / 0.5 / 0), base case            13783.10    22268.45    41949.64    118382.76    210907.22    302527.28
Insure low levels    (0.7 / 0.5 / 0 / 0)                       16836.87    26886.46    46009.92    121173.33    215548.58    308304.32
Insure evenly        (0.3 / 0.3 / 0.3 / 0.3)                   16273.14    23222.51    36685.07    89384.95     155125.15    220555.78

Assuming the overall amount of insurance is fixed, comparing these allocations shows that the loss is lowest when the insurance is focused on the top-level assets, which makes sense because those assets carry the highest value; spreading the insurance evenly across the levels also reduces the loss appreciably.
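A minimal sketch of how the insurance allocation enters the loss calculation, with assumed allocation vectors matching the table above; the asset level and the gross loss are illustrative.

insranBase = [0 0.7 0.5 0];    % base case: insure levels 2 and 3 at 70% and 50%
insranHigh = [0 0 0.7 0.5];    % alternative: focus the insurance on levels 3 and 4
assetLevel = 4;                % illustrative loss on a top-level asset
grossLoss  = 5000;
netBase = grossLoss * (1 - insranBase(assetLevel));   % 5000: level 4 is uninsured in the base case
netHigh = grossLoss * (1 - insranHigh(assetLevel));   % 2500: half of the loss is recovered under "insure high"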
3.1.8 C2 - Tackle relevant employees
After an asset misappropriation is discovered, the employees involved have to be dealt with, most commonly through dismissal. Once an employee is dismissed, the total loss exceeds the value of the misappropriated asset alone, and the dismissal of a higher-level employee has a larger impact. A severity index is therefore set for each level to capture this extra loss, for example the loss of a valuable employee.

Severity index (Level1 / Level2 / Level3 / Level4)    25%VaR      50%VaR      75%VaR      95%VaR       99%VaR       99.9%VaR
1 / 1 / 1 / 1                                         10801.93    15835.07    27158.07    71314.39     125321.62    178246.62
1 / 1.2 / 1.44 / 1.728 (base case)                    13783.10    22268.45    41949.64    118382.76    210907.22    302527.28
1 / 1.4 / 1.96 / 2.744                                17555.94    30718.59    62141.51    183129.61    330894.30    475413.99
1 / 1.6 / 2.56 / 4.096                                22116.06    41355.50    88486.15    269109.48    489831.22    704930.20

This is again a common-sense parameter: the more important the staff involved, the higher the loss.

3.1.9 Which is the best control?
Using the figures from the tables above, the VaR with and without each individual control can be compared in order to identify the most efficient one. Controls P1, P4 and C2 are structural parts of the model and would be unrealistic to remove, so they are kept throughout; the table below shows the result of removing each of the remaining controls in turn (a short sketch of this leave-one-out comparison is given at the end of this section).

Control removed     25%VaR      50%VaR      75%VaR      95%VaR       99%VaR       99.9%VaR
Origin (all on)     13783.10    22268.45    41949.64    118382.76    210907.22    302527.28
No P2               19619.21    29618.54    53995.88    178867.91    246059.11    371722.37
No P3               14222.78    22704.57    42397.68    118984.80    211220.30    302998.81
No D1               22558.54    38510.17    78060.97    170365.08    250599.46    343463.46
No D2               14064.39    22722.91    42805.75    120798.74    215211.45    308701.30
No C1               23247.35    33175.01    52407.24    127692.78    221607.36    315079.68

A large increase in loss when a control is removed indicates that the control is effective. On this basis, "Checking invoices and related documents" (D1) and "Insurance and backup" (C1) are the most effective controls for reducing the expected loss, while "Implement a whistleblowing policy" (P2) and D1 are the most effective for reducing the extreme loss. The contribution of "Internal audit" (D2) and "Impose clear segregation of duties" (P3) is much less visible once the other controls are in place.
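A minimal sketch of the leave-one-out comparison is given below. It assumes a hypothetical wrapper function simulateScenarioI(flags), standing in for the Appendix Scenario I code, that runs the Monte Carlo simulation with the optional controls P2, P3, D1, D2 and C1 switched on or off; this wrapper is not part of the original code.

controls = {'P2','P3','D1','D2','C1'};          % optional controls that can be switched off
base = simulateScenarioI(true(1,5));            % hypothetical wrapper around the Appendix code, all controls on
fprintf('Origin  99.9%%VaR = %.2f\n', prctile(base, 99.9));
for c = 1:numel(controls)
    flags = true(1,5);
    flags(c) = false;                           % switch one control off
    loss = simulateScenarioI(flags);            % rerun the Monte Carlo simulation without it
    fprintf('No %s   99.9%%VaR = %.2f\n', controls{c}, prctile(loss, 99.9));
end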
3.2 Sensitivity analysis for Scenario II
It is important to explore how different measures can protect the bank's data from cyber-attacks and limit the resulting losses. In this scenario three main factors protect the data or help recover lost data: the ability of the engineers, the solidity of each firewall, and the data backup. The purpose of this section is to compare them and to draw a reliable conclusion about which factor is the most significant and which strategy is the most efficient way to react to and prevent data sabotage.

3.2.1 Analyzing importance of ability of engineers
As stated above, the frequency with which the system is checked is how the capability of the engineers is measured in this scenario, since more frequent checks reduce the average time needed to detect an infiltration. Recomputing the VaR for different checking frequencies therefore shows how sensitive the final loss is to the engineers' ability.

Check frequency    25%VaR      50%VaR      75%VaR      95%VaR      99%VaR      99.9%VaR
once 70 mins       27820.00    32642.00    38692.00    57662.00    70757.35    88987.40
once 60 mins       26932.00    31143.42    36216.00    48334.67    59349.45    76068.35
once 50 mins       25376.00    29248.00    33388.00    39912.00    44763.71    50326.59
once 40 mins       23306.00    26910.00    30740.00    36652.00    40856.00    46206.00
once 30 mins       20182.00    23512.00    27060.00    32304.00    36158.00    40582.00

The results show that the loss increases with the time needed to detect an intrusion, i.e. it falls as the engineers check the system more frequently, and the effect is stronger at the higher quantiles of VaR. The change is large between checks every 70, 60 and 50 minutes, so it is efficient and worthwhile, even allowing for the cost of the network engineers, to move from the 60-minute level to the 50-minute level. The bank can of course hire the most capable engineers if it judges this necessary given the importance of its data; the largest change reported is 70081.88, obtained by moving the checking frequency from once every 60 minutes to once every 30 minutes.
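A minimal sketch of how the checking frequency enters the model, using the base parameters of the Appendix Scenario II code; the mapping from undetected time to the number of firewalls reached is simplified here for illustration.

Efrequency = 60;                        % engineers check the system once every 60 minutes
fiwotime   = [5 15 45];                 % minutes a hacker needs to break firewalls 1, 2 and 3
restime    = rand() * Efrequency;       % undetected time of one attack, uniform on [0, Efrequency]
reached    = sum(restime > fiwotime);   % number of firewalls the hacker has time to attack
% A shorter checking cycle shrinks 'restime', so fewer firewalls can be reached
% and the loss distribution shifts towards lower values.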
3.2.2 Analyzing solidity of each firewall
Firewalls are the most significant and most common protection against the majority of data-sabotage attacks. Here the aim is to show how essential each firewall is by decreasing the probability of passing it, which is taken as the standard for improving its security level; the base pass probabilities of firewalls 1, 2 and 3 are 50%, 25% and 5%.

50% VaR                        Firewall 1    Firewall 2    Firewall 3
Base (50% / 25% / 5%)          31143.42      31143.42      31143.42
Pass prob. reduced by 10%      27972.00      29394.00      31058.00
Pass prob. reduced by 20%      24786.00      27640.00      30978.00
Pass prob. reduced by 30%      21704.00      25772.00      30916.00

99.9% VaR                      Firewall 1    Firewall 2    Firewall 3
Base (50% / 25% / 5%)          76068.35      76068.35      76068.35
Pass prob. reduced by 10%      69854.40      74094.82      74053.13
Pass prob. reduced by 20%      65736.68      70948.92      73887.35
Pass prob. reduced by 30%      61185.86      65778.22      69987.35

The VaR is clearly sensitive to the security level of the firewalls; the largest improvement comes from hardening firewall 1, whose 99.9% VaR falls from 76068.35 to 61185.86 when its pass probability is reduced by 30%. Firewalls are therefore essential for protecting the bank's data.
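A minimal sketch of this experiment, under the assumption that "hardening" a firewall simply means scaling down its pass probability before rerunning the Scenario II simulation; the choice of firewall and reduction level are illustrative.

probattk  = [0.50 0.25 0.05];      % base probabilities of passing firewalls 1, 2 and 3
hardened  = 1;                     % which firewall to harden
reduction = 0.30;                  % reduce its pass probability by 30%
probattkNew = probattk;
probattkNew(hardened) = probattk(hardened) * (1 - reduction);
% probattkNew then replaces probattk in the Scenario II Monte Carlo code and the VaR is recomputed.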
3.2.3 Impact of percentage of total data in backup on VaR
Normally a bank can recover lost data from its backup, but it cannot retrieve everything from the database backup, for example because of staff mis-operations. It is therefore important to make sure that all essential data are backed up, so that the business keeps working even in the worst case where some essential data are lost. In this part the percentage of data held in backup is varied to show the change in VaR and to find the most efficient way of recovering data after a sabotage event.

% of data in backup    25%VaR      50%VaR      75%VaR      95%VaR      99%VaR      99.9%VaR
80%                    26932.00    31143.42    36216.00    48334.67    59349.45    76068.35
85%                    25785.00    29853.25    34705.25    46282.36    56996.67    73124.86
90%                    24662.00    28537.00    33178.50    44241.33    54723.11    70181.37
95%                    23544.25    27252.50    31671.00    42219.02    52339.19    67237.88

Increasing the backup percentage of the clients' information produces a sizeable change. Even though only about half of the lost data relates to clients' information (the only part that is backed up) and management information usually cannot be backed up in time, the backup still has a large impact on the VaR at every quantile.
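A minimal sketch of the backup-recovery step, following the Appendix Scenario II code; the gross loss figure is illustrative.

percentdata = 0.5;      % proportion of the loss units that are clients' data
backupdata  = 0.8;      % 80% of the clients' data is backed up (base case)
vlost = 10000;          % illustrative gross data loss (in dollars) from one attack
unit  = vlost / 100;    % the loss is split into 100 units
for j = 1:100
    recovered = (rand() < percentdata) * backupdata * unit;   % a unit is recovered only if it is client data
    vlost = vlost - recovered;
end
% Raising 'backupdata' towards 95% lowers the residual loss, as in the table above.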
3.2.4 Impact of different firewalls
Changing the number of firewalls is another way to look for a better network design. The 3-firewall structure is the bank's initial condition; what happens if the bank reduces the number of firewalls to 2? Some parameters then have to be adjusted so that the two structures are comparable, and the results can be used to derive strategic options for the bank's network system.

Parameter                                      3-firewall structure (before)            2-firewall structure (after)
Time to break the firewall (min)               FW1: 5     FW2: 15     FW3: 45           FW1: 15     FW2: 50
Probability of breaking the firewall           FW1: 0.5   FW2: 0.25   FW3: 0.05         FW1: 0.2    FW2: 0.04
Data volume proportion behind the firewall     FW1: 0.05  FW2: 0.15   FW3: 0.8          FW1: 0.2    FW2: 0.8
Data value behind the firewall ($ per unit)    FW1: 10    FW2: 20     FW3: 50           FW1: 17.5   FW2: 50

Running the same algorithm with these parameters gives the following result.
Structure      25%VaR      50%VaR      75%VaR      95%VaR      99%VaR      99.9%VaR
3 firewalls    26876.00    31220.00    36354.65    48574.14    59106.87    74493.84
2 firewalls    26880.00    31817.38    37716.00    46984.00    54560.53    64589.95

The comparison shows a clear pattern: the 3-firewall structure gives a slightly lower expected loss (lower VaR at the 50% quantile and below), whereas the 2-firewall structure gives a noticeably lower extreme loss (lower VaR from the 95% quantile upwards).

3.3 Sensitivity Analysis for Aggregated Scenario
In the aggregated scenario generated earlier, the only free parameter is the correlation between the two scenarios. It is therefore adjusted here to explore the relationship between the total loss and the two individual losses; in the following table, 0 means that the two scenarios are uncorrelated.

Correlation    25%VaR      50%VaR      75%VaR      95%VaR       99%VaR       99.9%VaR
0              30254.03    38285.72    55419.95    127869.93    219098.66    311947.77
0.3            33734.71    43380.94    63110.38    140655.30    235615.27    333344.57
0.7            37881.34    49363.13    72099.36    156081.73    255985.02    359899.80
1              40715.10    53411.87    78165.64    166717.43    270256.66    378595.63

The results show that a stronger correlation produces a higher VaR for both the expected and the extreme loss. A possible explanation is that once a loss occurs in one scenario, the underlying risk factors are likely to be elevated, and through the correlation those same factors also drive a loss in the other scenario.
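For reference, the aggregation used in the Appendix combines the two losses through the correlation matrix, as in the following minimal sketch; the loss values chosen here are illustrative.

rho = 0.3;                    % assumed correlation between the two scenarios
R   = [1 rho; rho 1];         % correlation matrix
L1  = 30000;                  % illustrative simulated loss from scenario I
L2  = 20000;                  % illustrative simulated loss from scenario II
x   = [L1 L2];
Ltotal = sqrt(x * R * x');    % aggregated loss; with rho = 0 this reduces to sqrt(L1^2 + L2^2)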
4.Alternative Adjustment on Loss Measure Quantile
Next, a clustering method is introduced to refine the VaR results. The approach generates new VaR quantiles based on severity, which makes it possible to combine expert-opinion scenarios with quantitative operational risk data. The methodology was first proposed by Dr Sovan Mitra in 2013, borrowing a key idea from machine learning [12].

4.1 Introduction to Cluster Analysis
To adjust the scenario results, cluster analysis is applied to match severity magnitudes. Clustering is a method of grouping data into subsets known as clusters. K-means cluster analysis is a form of unsupervised learning, a branch of machine learning in which a particular algorithm explores the common features of the data rather than learning from labelled examples. K-means is a simple iterative clustering algorithm: it uses a distance (e.g. the Euclidean distance) as the similarity index to partition a given data set into K classes, the centre of each class being the mean of all the values assigned to it, and each class is described by its cluster centre.
4.2 Application on Adjustment of Scenario Result
The basic steps of the K-means clustering algorithm are as follows.
Step 1: Select K objects in the data space as the initial centres, each object representing one cluster centre.
Step 2: For every data object in the sample, calculate the Euclidean distance between it and each cluster centre, then assign it to the class of the nearest cluster centre.
Step 3: Update the cluster centres: the mean of all the objects in each class becomes the new centre of that class, and the value of the objective function is computed.
Step 4: Determine whether the cluster centres and the value of the objective function have changed. If both stay the same, output the results; otherwise return to Step 2.

Using this algorithm, the Monte Carlo simulation result is used as the sample, and the VaR intervals are replaced by the cluster centres of each interval. The results are shown below.

For scenario 1 - asset misappropriation
Unmodified    25% VaR     50% VaR     75% VaR     95% VaR      99% VaR      99.9% VaR
              13783.10    22268.45    41949.64    118382.76    210907.22    302527.28
Modified      48.1%       76.3%       92.4%       96.5%        99.8%        100.0%
              21335.32    43380.22    84397.50    151063.49    257611.43    429427.17

For scenario 2 - cyber attack
Unmodified    25% VaR     50% VaR     75% VaR     95% VaR     99% VaR     99.9% VaR
              26932.00    31143.42    36216.00    48334.67    59349.45    76068.35
Modified      31.6%       66.6%       88.7%       97.1%       99.7%       100.0%
              28136.00    34181.89    41909.31    52582.55    66845.03    90931.40

4.3 Important Meaning to Loss Measure Quantile
The VaR interval points (25%, 50%, 75%, 95%, 99%, 99.9%) are based on empirical judgement and are usually fixed as a standard for operational risk modelling. Fixed interval points, however, cannot clearly reflect the features of different distributions. The cluster method provides an effective way to reflect, at the same time, the shape of the distribution across several intervals and the loss level in each interval, which is an important improvement to the loss measure quantiles. In our results, although the fixed interval points have changed, the modified outcome reflects the average VaR level in 6 different intervals, as well as the relative position of each interval within the overall loss distribution.
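As a cross-check of the hand-written loop in the Appendix, the same adjustment can be sketched with MATLAB's built-in kmeans function (Statistics and Machine Learning Toolbox); the gamma-distributed sample below is only a stand-in for the simulated losses, and the cluster count of 6 matches the six VaR intervals used above.

rng(0);                                    % reproducible stand-in sample
loss = gamrnd(2, 20000, 10000, 1);         % placeholder loss sample, not the actual simulation output
k = 6;
[idx, centres] = kmeans(loss, k);          % assign each loss to one of 6 severity clusters
centres = sort(centres);                   % sorted cluster centres act as the adjusted VaR levels
pct = zeros(1, k);
for j = 1:k
    pct(j) = mean(loss <= centres(j)) * 100;   % empirical quantile corresponding to each centre
end
disp([pct; centres'])                      % adjusted quantiles and their VaR levels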
5.Conclusion
In conclusion, loss distributions have been generated for the asset misappropriation and cyber-attack scenarios and for the combination of the two. Building on this scenario analysis, the sensitivity analysis helps us to identify the most essential factors behind the operational risks, and these form the basis of the strategic suggestions to managers below.

5.1 Discussion of strategic options
The specific strategies are discussed separately for asset misappropriation (scenario 1) and cyber-attack (scenario 2).

In scenario 1, the analysis first shows that the internal fraudsters behind asset misappropriation come mainly from the top two levels of employees, i.e. the head of the bank and vice-presidents, and managers or directors. By abusing their authority they can easily access and appropriate the bank's assets without supervision, and once such an event happens it almost surely causes a huge loss for the bank. We therefore strongly suggest that the bank engage a third party as an independent asset-management platform to record and check how high-level employees use their authority, especially over the bank's assets. Second, whistleblowing is also a highly efficient control for reducing operational risk losses in scenario 1. Our scenario data show that a whistleblowing scheme, within the same employee level or between levels, contributes far more to operational risk management in this situation than the other controls. Whistleblowing should therefore be promoted, with appropriate rewards, to help the bank establish the scheme and build employees' whistleblowing awareness.

In scenario 2, cyber-attacks are normally caused by deliberate external attacks on the bank's information network, so the situation can be thought of as a battle between the bank's information security engineers and the hackers. It is efficient to reduce the engineers' detection gap from once every 70 minutes to once every 50 minutes; pushing further below 50 minutes, however, has little effect and comes at a high cost, perhaps because at that point the engineers' ability already exceeds that of the majority of hackers. As for firewalls, adding firewalls mainly protects the essential information sitting behind the inner layers, at the cost of larger losses of nonessential data in front of them; when fewer firewalls are used, each individual firewall is assumed to be stronger, and the results then show that the bank may lose more core information but less ordinary data than with the multi-layer structure. Depending on the type of information the bank most wants to protect, managers can choose and adjust the structure accordingly. Finally, the dependency analysis in the combined scenario indicates that the quality of employees is a key risk driver in both scenarios; it is therefore necessary to improve the bank's recruitment procedure and to vet CVs as well as references.
5.2 Limitation and Improvement
For some essential parameters of our scenarios we simply relied on expert opinion and historical loss distributions, which may introduce cognitive biases relative to the real market and to predictions affected by the uncertainty of the future business environment. The parameters should therefore be set using both internal and external experts, together with reasonable assumptions about future changes in local and global circumstances, and, where necessary, conservative assumptions for the most sensitive factors. The treatment of parameter changes could also be made more flexible; for instance, the hackers' ability should be varied more randomly and less predictably to simulate realistic cases. A more advanced dependency structure could also be applied to attribute different risk drivers to the scenarios, so that a more appropriate correlation and variance matrix can be generated to combine the two scenarios.

6.Reference
[1] K. van der Heijden, Scenarios: The Art of Strategic Conversation, Wiley, Chichester, 1996.
[2] T.J. Postma and F. Liebl, How to improve scenario analysis as a strategic management tool, Technological Forecasting & Social Change 72 (2005) 161–173.
[3] P.J.H. Schoemaker, C.A.J.M. van der Heijden, Integrating scenarios into strategic planning at Royal Dutch/Shell, Plann.
Rev. 20 (3) (1992) 41–48. 
 [4] K. van der Heijden, Scenarios: The Art of Strategic Conversation, Wiley, Chichester, 1996. [5] M. Godet, Scenarios and Strategic Management, Butterworth, London, 1987. 
 [6] W.R. Huss, A move toward scenario analysis, Int. J. Forecast. 4 (1988) 377–388. 
 [7] M.E. Porter, Competitive Advantage—Creating and Sustaining Superior Performance, Free Press, New York, 1985. 
 [8] P. Schwartz, The Art of the Long View: Planning for the Future in an Uncertain World, Doubleday Currency, New York, 
1991. 
 [9] U. von Reibnitz, Scenario Techniques, McGraw-Hill, Hamburg, 1988. 
 [10]G. Ringland, Scenario Planning: Managing for the Future, Wiley, Chichester, 1998. 
 [11]R.P. Bood, Th.J.B.M. Postma, Strategic learning with scenarios, Eur. Manag. J. 15 (6) (1997) 633–647. 
[12] S. Mitra, Scenario Generation for Operational Risk, Intelligent Systems in Accounting, Finance and Management, 20 (2013), 163–187.
[13] E. Barbieri Masini, J. Medina Vasquez, Scenarios as seen from a human and social perspective, Technol. Forecast. Soc. Change 65 (1) (2000) 49–66.
 [14]K. van der Heijden, R. Bradfield, G. Burt, G. Cairns, G. Wright, The Sixth Sense: Accelerating Organizational Learning with Scenarios, Wiley, Chichester, 2002. 
[15] J. Corrigan et al., Milliman Research Report: Aggregation of Risks and Allocation of Capital, 2009.

7.Appendix

1. Codes for Scenario I based on Matlab

clear;close all;clc
rand('state',0); % fix random number, good for sensitivity
randn('seed',0); % fix random number
H=2000; % total employees
Hlevel=[1200 600 180 20]; % employees level number
ptheft=[.1 .1 .05 .05]; % criminal probability
muthe=[10 20 100 1000]; % asset mu
sigmathe=[3 6 30 300]; % asset sigma
percentage=[.5 .75 .9]; % volume of asset in different level
itemrange=[15 35 65 100]; % level setting
whithe=0.5; % whistleblowing probability
segthe=0.2; % cross-department probability
minuamou=0.8; % proportion of access to cross-asset
pplevel=[.5 .25 .1]; % cross-level probability
severi=[1 1.2 1.44 1.728]; % severity
Sevinteadu=0.98; % internal audit
insran=[0 .7 .5 0]; % insurance proportion
N=10000;
for i=1:N
    % P1 - Vet employees by CV and references
    ntheft(1)=binornd(Hlevel(1),ptheft(1),1,1);
    ntheft(2)=binornd(Hlevel(2),ptheft(2),1,1);
    ntheft(3)=binornd(Hlevel(3),ptheft(3),1,1);
    ntheft(4)=binornd(Hlevel(4),ptheft(4),1,1);
    for ii=1:4
        sumtiWU(ii)=0;sumtiP2(ii)=0;sumtiD1(ii)=0;sumtiQU(ii)=0;
        if ntheft(ii)==0
            amou(ii)=0; jthe(ii)=0; sxx(ii)=0; ppp(ii)=0;
            break;
        end
        for j=1:ntheft(ii)
            % decide amount
            amou(ii)=ceil(normrnd(muthe(ii),sigmathe(ii)));
            % decide values
            xx=rand();
            if xx<=percentage(1)
                sxx(ii)=rand()*10;
            elseif xx<=percentage(2)
                sxx(ii)=rand()*20+10;
            elseif xx<=percentage(3)
                sxx(ii)=rand()*30+30;
            else
                sxx(ii)=rand()*40+60;
            end
            % decide levels
            if sxx(ii)<=itemrange(1)
                jthe(ii)=1;
            elseif sxx(ii)<=itemrange(2)
                jthe(ii)=2;
            elseif sxx(ii)<=itemrange(3)
                jthe(ii)=3;
            else
                jthe(ii)=4;
            end
            QUQU=1;
            % P2 - Implement a whistleblowing policy
            if (ii==jthe(ii)) && (rand()<=whithe)
                QUQU=0;
            end
            % P3 - Impose clear segregation of duties
            if (ii~=4)&&(rand()<=segthe)
                amou(ii)=ceil(amou(ii)*minuamou);
            end
            % P4 - Control access to buildings and systems
            if sxx(ii)<=itemrange(1)
                ppp(ii)=1;
            elseif sxx(ii)<=itemrange(2)
                ppp(ii)=1*(ii>=2)+(ii==1)*(rand()<pplevel(1));
            elseif sxx(ii)<=itemrange(3)
                ppp(ii)=1*(ii>=3)+(ii==1)*(rand()<pplevel(1))*(rand()<pplevel(2))+(ii==2)*(rand()<pplevel(2));
            else
                ppp(ii)=(ii==4)+(ii==1)*(rand()<pplevel(1))*(rand()<pplevel(2))*(rand()<pplevel(3))+(ii==2)*(rand()<pplevel(2))*(rand()<pplevel(3))+(ii==3)*(rand()<pplevel(3));
            end
            DDD=1;
            % D1 - Checking invoices and related documents
            if ii~=jthe(ii)
                DDD=0.5;
            end
            % C1 - Insurance + C2 - Tackle relevant employees
            sumtiQU(ii)=sumtiQU(ii)+amou(ii)*sxx(ii)*ppp(ii)*severi(ii)*(1-insran(ii))*DDD*QUQU;
        end
        % D2 - Internal Audit
        sumtheQU(i)=sum(sumtiQU)*Sevinteadu;
    end
end
hist(sumtheQU,1000);
% percentile selection of the convoluted distributions
VARQU=prctile(sumtheQU,[25, 50, 75, 95, 99, 99.9]);

2. Codes for Scenario II based on Matlab

rand('state',0);
randn('seed',0);
H=100; % possible attacks
Efrequency=60; % Engineers check system once an hour
amoutdata=10000; % assume there are 10000 units of data
fiwotime=[5 15 45]; % time used by hackers to pass each firewall
probattk=[.5 .25 .05]; % probability of hackers passing each firewall
perdata=[.05 .1 .85]; % percentage of data behind each firewall
valdata=[10 20 50]; % dollars per unit of data
percentpermin=.05; % data loss rate once hackers pass the third firewall
percentdata=.5; % the proportion of clients' data
backupdata=.8; % back up 80% of clients' data
percentage=[.6 .9 .95 .975 .99];
N=10000; % times that Monte Carlo runs
for ii=1:N
    vnlost(ii)=0;
    for i=1:H
        restime=rand()*Efrequency;
        if restime<fiwotime(1)
            srr=0;svv=0;
        elseif restime<fiwotime(2)
            srr=(rand()<probattk(1))*perdata(1);
            svv=srr*valdata(1);
        elseif restime<fiwotime(3)
            srr=(rand()<probattk(1))*(perdata(1)+(rand()<probattk(2))*perdata(2));
            svv=srr*valdata(1)+(srr>perdata(1))*(srr-perdata(1))*(valdata(2)-valdata(1));
        else
            srr=(rand()<probattk(1))*(perdata(1)+(rand()<probattk(2))*(perdata(2)+(rand()<probattk(3))*(restime-fiwotime(3))*percentpermin));
            svv=srr*valdata(1)+(srr>perdata(1))*(srr-perdata(1))*(valdata(2)- ...
                valdata(1))+(srr>(perdata(1)+perdata(2)))*(srr-perdata(1)-perdata(2))*(valdata(3)-valdata(2));
        end
        vlost(i)=svv*amoutdata;
        % backup of lost data in clients' information:
        % vlost is divided into 100 units, 50% client and 50% management;
        % clients' information has an 80% backup
        veachlost(i)=vlost(i)/100;
        for j=1:100
            vback(j)=(rand()<percentdata)*backupdata*veachlost(i);
            vlost(i)=vlost(i)-vback(j);
        end
        vnlost(ii)=vnlost(ii)+vlost(i);
    end
end
hist(vlost,1000); % plot of the results
VAR=prctile(vlost,[25, 50, 75, 95, 99, 99.9]) % percentile selection of the convoluted distributions

3. Codes for Aggregated Scenario based on Matlab

X1=sort(vnlost);
X2=sort(sumtheQU);
corr=[0 .3 .7 1]; % correlation
output=[]
for j=1:4
    ROU=[1 corr(j);corr(j) 1]; % correlation matrix
    for i=1:N
        X=[X1(i) X2(i)];
        XBOTH(i)=sqrt(X*ROU*X');
    end
    VARboth=prctile(XBOTH,[25, 50, 75, 95, 99, 99.9])
    plot([25, 50, 75, 95, 99, 99.9],VARboth)
    output=[output;VARboth]
    hold on,
end
output

4. K-mean cluster algorithm based on Matlab
Q=VARQU; % VAR
n=X2; % LOSS
PEC=[25 50 75 95 99 99.9]; % PERCENTAGE
k=[0 0 0 0 0 0]; % LOCATION
SUI1=[0 0 0 0 0 0]; % AMOUNT OF EACH GROUP
SUM1=Q;
SUM2=Q;
%n=gamrnd(2,20000,10000,1);
subplot(1,2,1)
hist(n,1000);
subplot(1,2,2);
%plot([25, 50, 75, 95, 99, 99.9],SUM1,'-O');
SDASSDA=6; % number of clusters shown in the plots (defined before the loop so it always exists)
while 1
    SUM1=[0 0 0 0 0 0]; % grouping
    for j=1:10000
        for i=1:6
            k(i)=abs(SUM2(i)-n(j));
        end
        m=min(k);
        [xx]=find(k==m);
        SUM1(xx)=SUM1(xx)+n(j);
        SUI1(xx)=SUI1(xx)+1;
    end
    % K-means K=6
    SUL(1)=0;
    for i=1:6
        SUM1(i)=SUM1(i)/SUI1(i);
        SUL(i+1)=SUL(i)+SUI1(i);
    end
    for i=1:6
        SULL(i)=SUL(i+1);
        SSS(i)=n(SULL(i));
    end
    %disp(SULL);
    %disp(SUM1);
    SUI1=[0 0 0 0 0 0];
    % convergence condition
    if max(abs(SUM1-SUM2)./SUM2)<=0.05
        break;
    end
    SUM2=SUM1;
    hold on, plot(SULL(1:SDASSDA)/100,SSS(1:SDASSDA));
end
hhhh=[SULL;SSS;PEC*100;Q]
hold on, plot(SULL(1:SDASSDA)/100,SSS(1:SDASSDA),'LineWidth',3);
hold on, plot(PEC(1:SDASSDA),Q(1:SDASSDA),'-O');