Sep 29, 2011

Pension Gamification

For years you deny it, then you doubt it, then you know for sure:



This blog is written especially for (1) those who are still in the denial phase and (2) 'actuarial life gamers' who just want to enjoy actuarial gaming....

Pension Game
Games are an excellent way to involve people (employees) in a complex and - in a twofold sense - 'low interest' product like pensions.


Pension games stimulate clear communication and understanding of pensions (The Nest Phrasebook: Clear communication about pensions, Version 1.1).

Games, like the pension game above, are conquering the world more and more.

Gamification
It looks like everything that has to be sold or communicated succeeds better with the help of a game. Gamification gets people more engaged, helps change behaviors and stimulates innovation. In other words:

Gamification rules our life

As an example of gamification, Gartner cited the U.K.’s Department for Work and Pensions, which created an innovation game called Idea Street to decentralize innovation and generate ideas from its 120,000 people across the organization. Idea Street is a social collaboration platform with the addition of game mechanics, including points, leaderboards and a “buzz index.”

The employees went wild for it. Within 18 months, Idea Street had approximately 4,500 users and had generated 1,400 ideas, 63 of which had gone forward to implementation.

Other gamification examples are the U.S. military’s “America’s Army” video-game recruiting tool, and the World Bank-sponsored Evoke game, which crowdsources ideas from players globally to solve social challenges.

All this and more can be found in Gartner's 2011 report, which states that by 2015 more than 50% of organizations that manage innovation processes will gamify those processes.

Consequences
Mainly as a consequence of the overdose of gamification in our society, people get confused and lose sight of the difference between reality and illusion.

This confusion is exacerbated by the fact that the negative effects of the current financial crisis have been 'managed away' instead of letting people and organizations 'perceive' and 'experience' the (negative) financial consequences of their actions.



The 'Hocus Pocus Society'
In this way we have gradually created a 'Hocus Pocus Society', where all our (actuarial) models and convictions are doomed to fail as the 'game of life' seems to be to:
  • challenge the established (good governance) rules to raise profit and returns to an unrealistic level, by introducing uncontrolled and uncontrollable mechanisms and financial instruments like 'market value', 'derivatives', 'sub-prime mortgages', 'High Frequency Trading', etc.
  • try - at the same time - to capture and control these volatile 'unwanted' effects of these mechanisms and instruments by an overdose of hypocritical additional regulation (Solvency (II), Governance, etc.)
  • transfer fundamentally complex risks back to consumers and communicate this in such a (so-called) 'transparent' but oversimplified way that consumers are sure to lose their trust in financial institutions as a whole.
  • end up with new financial products and offerings on the marketplace that are - for the financial institutions - 99.9% risk free, with a stockholders' dividend level that does not correspond.



This illusory way of communicating about pensions is well demonstrated in the following 'Pension game' video: The myth of your (401K) pension





Way out
To get out of this downward spiral in the financial industry, we'll have to learn from other industries.

Just as with the introduction of new medicines, new financial products will have to pass a number of tests and need explicit approval by internal and external regulators before they are allowed onto the marketplace.

Anyhow: Don't end up like a 'Hocus Pocus Actuary' and game up your actuarial life!



Related and additional links:
- Idea Street
- Gartner: Over 50% firms may gamify processes 
- Youtube: The Pension Game
- The Annuity Game - Heads Government Wins Tails Pensioner's Lose
- 5 Cent "Old Age Pension" Dice Game 


Calculators:
- life expectancy calculator
- Retirement Withdrawal Calculator






Sep 26, 2011

Small Population Compliance Samples

My last post, Compliance Sample Size, demonstrated the setup of an efficient sampling method for compliance tests in case of large populations.

What if the population size is relatively small?, some actuaries asked me....

In this case you can (instead of the beta distribution) make use of the hypergeometric distribution for calculating confidence levels.
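
For those who want to check the mechanics outside a spreadsheet, here's a minimal sketch in Python (my own illustration, not the blog's spreadsheet), assuming a uniform prior over the unknown number of noncompliant files in the population:

from scipy.stats import hypergeom
import numpy as np

# Confidence that the population noncompliance rate is at most 'max_rate',
# after observing 'bad' noncompliant files in a sample of size 'n' drawn
# from a population of size 'N' (uniform prior over the unknown number
# of noncompliant files D).
def confidence(N, n, bad, max_rate=0.05):
    D = np.arange(N + 1)                 # possible noncompliant counts in the population
    like = hypergeom.pmf(bad, N, D, n)   # likelihood of the observed sample
    post = like / like.sum()             # posterior under the uniform prior
    return post[D <= max_rate * N].sum()

# Example: population of 100, a sample of 40 with zero noncompliant files
print(round(confidence(100, 40, 0), 2))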

Here's the same example as I used in my blog 'Compliance Sample Size', but now for a population of 100.


'Compliance Check' Example (N=100)
As you probably know, pension advisors have to be compliant and meet strict federal, state and local regulations.

The employee, the sponsoring employer, as well as the insurer or pension fund, all have a strong interest that the 'Pension Advisor' involved actually is, acts and remains compliant.

PensionAdvice
A professional local Pension Advisor firm, 'PensionAdvice' (fictitious name), wants 'compliance' to become a 'calling card' for their company. The target is that 'compliance' will become a competitive advantage over its rivals.

You, as an actuary, are asked to advise on how to verify PensionAdvice's compliance.... What to do?

  • Step 1: Compliance Definition
    First you ask the board of PensionAdvice what compliance means.
    After several discussions compliance is in short defined as:

    1. Compliance Quality
      Meeting the regulator's (12 step) legal compliance requirements
      ('Quality Advice Second Pillar Pension')

    2. Compliance Quantity
      A 100% compliance target of PensionAdvice's portfolio, with a 5% noncompliance rate (error rate) as a maximum, on the basis of a 95% confidence level.
  • Step 2: Check on the prior beliefs of management
    On the basis of earlier experiences, management estimates the actual NonCompliance rate at 8%, with 90% confidence that the actual NonCompliance rate is 8% or less:

    If management has no idea at all, or if you don't want to include management's opinion, simply set both (NonCompliance rate and confidence) at 50% (= indifferent) in your model.

  • Step 3: Define Management Objectives
    After some discussion, management defines the (target) Maximum acceptable NonCompliance rate at 5% with a 95% confidence level (=CL).

  • Step 4: Define population size
    In this case it's simple. PensionAdvice's management knows for sure that the portfolio they want to check for compliance consists of 100 files: N=100.

    This is how steps 2 to 4 look in your spreadsheet...



  • Step 5: Define Sample Size
    Now we get to the testing part....

    Before you start sampling, please notice how the prior beliefs of management are rendered into a fictitious sample (test number = 0) in the model:
  • In this case prior beliefs match a fictitious sample of size 25 with zero noncompliance observations.
  • This fictitious sample corresponds to a confidence level of 77% on the basis of a maximum (population) noncompliance rate of 5%.
[If you think the rendering is too optimistic, you can change the fictitious number of noncompliance observations from zero into 1, 2 or another number (examine in the spreadsheet what happens and play around).]


To lift the 77% confidence level to 95%, it would take an additional sample size of 20 - with zero noncompliance outcomes (you can check this in the spreadsheet).
As sampling is expensive, your employee Jos runs a first test (test 1) with a sample size of 10 with zero noncompliance outcomes. This looks promising!
The cumulative confidence level has risen from 77% to over 89%.


You decide to take another limited sample with a sample size of 10. Unfortunately this sample contains one noncompliant outcome. As a result, the cumulative confidence level drops to almost 75% and another sample of size 20 with zero noncompliant outcomes is necessary to reach the desired 95% confidence level.

You decide to go on, and after a few other tests you finally arrive at the intended 95% cumulative confidence level. Mission succeeded!

Evaluation
The interesting aspects of this method are:

  1. Prior (weak or small) samples or beliefs about the true error rate and confidence levels can be added to the model in the form of an (artificial) additional (pre)sample.

  2. As the sample size increases, it becomes clear whether the defined confidence level will be met, and whether adding more samples is appropriate and/or cost effective.
This way unnecessary samples are avoided, sampling becomes as cost effective as possible, and auditor and client can dynamically develop a grip on the distribution.

Another great advantage of this incremental sampling method is that if noncompliance shows up at an early stage, you can:
  • stop sampling, without having incurred major sampling costs
  • improve compliance of the population by means of additional measures, based on the learnings from the noncompliant outcomes
  • start sampling again (from the start)

If - for example - test 1 had had 3 noncompliant outcomes instead of zero, it would take an additional test of size 57 with zero noncompliant outcomes to achieve a 95% confidence level. It's clear that in this case it's better to first learn from the 3 noncompliant outcomes what's wrong or needs improvement, than to go on with expensive sampling against your better judgment.


Conclusions
On the basis of a prior belief that - with 90% confidence - the population is 8% noncompliant, we can now conclude that after an additional total sample of size 40, PensionAdvice's noncompliance rate is 5% or less with a 95% confidence level.

If we want to be 95% sure without the 'prior belief', we'll have to take an additional sample of size 25 with zero noncompliant outcomes as a result.

Check out: DOWNLOAD EXCEL

You can download the following Excel spreadsheets to check the demo or to set up your own compliance test:

- Small population Compliance test DEMO
- Small population Compliance test BLANK
- Large population Compliance test

Enjoy!

Sep 25, 2011

Compliance: Sample Size

How to set an adequate sample size in case of a compliance check?

This simple question ultimately has a simple answer, but can become a "mer à boire" (nightmare) in case of a 'classic' sample size approach.....

In my last-but-one blog called 'Pisa or Actuarial Compliant?', I already stressed the importance of checking compliance in the actuarial work field.

Compliance is important not only from an actuarial perspective, but also from a core business viewpoint:

Compliance is the main key driver for sustainable business

Minimizing Total Cost by Compliance
A short illustration: we all know that compliance costs are part of Quality Control Cost (QC Cost) and that the cost of NonCompliance (NC Cost) increases with the noncompliance rate.

'NC Cost' mainly relates to:
  • Penalties or administrative fines of the (legal) regulators
  • Extra cost of complaint handling
  • Client claims
  • Extra administrative cost 
  • Cost of legal procedures

Sampling costs - in their turn - are a (substantial) part of QC Cost.

More generally, it's the art of good compliance management practice to determine the maximum noncompliance rate that minimizes a company's total cost.



Although this approach is more or less standard, in practice companies' revenues depend strongly on the level of compliance. In other words: if compliance increases, revenues increase and variable costs decrease.

This implies that introducing 'cost driven compliance management' will - in general - (1) reduce the total cost and (2) mostly make room for additional investments in 'QC Cost' to improve compliance and to lower variable and total cost.

In practice you'll probably have to calibrate (together with other QC investment costs) to find the optimal cost (investment) level that minimizes total cost as a percentage of revenues. A simple sketch of this idea follows below.
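
A minimal, purely hypothetical sketch of this calibration idea in Python (all cost functions below are invented for illustration, not taken from any real case):

import numpy as np

rates = np.linspace(0.001, 0.20, 200)  # candidate noncompliance rates
qc_cost = 2.0 / rates                  # assumed: control costs rise as the target rate falls
nc_cost = 400.0 * rates                # assumed: fines, claims and complaints grow with the rate
total = qc_cost + nc_cost

best = rates[np.argmin(total)]
print(f"Total cost is minimized at a noncompliance rate of ~{best:.1%}")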


As is clear, modeling this kind of stuff is no work for amateurs. It's real risk management craftsmanship. After all, the effect of cost investments is not certain and depends on all kinds of probabilities and circumstances that need to be carefully modeled and calibrated.

From this meta perspective, let's descend to a down-to-earth 'real life' example.

'Compliance Check' Example
As you probably know, pension advisors have to be compliant and meet strict federal, state and local regulations.

The employee, the sponsoring employer, as well as the insurer or pension fund, all have a strong interest that the 'Pension Advisor' involved actually is, acts and remains compliant.

PensionAdvice
A professional local Pension Advisor firm, 'PensionAdvice' (fictitious name), wants 'compliance' to become a 'calling card' for their company. The target is that 'compliance' will become a competitive advantage over its rivals.

You, as an actuary, are asked to advise on how to verify PensionAdvice's compliance.... What to do?


  • Step 1: Compliance Definition
    First you ask the board of PensionAdvice what compliance means.
    After several discussions compliance is in short defined as:

    1. Compliance Quality
      Meeting the regulator's (12 step) legal compliance requirements
      ('Quality Advice Second Pillar Pension')

    2. Compliance Quantity
      A 100% compliance target of PensionAdvice's portfolio, with a 5% noncompliance rate (error rate) as a maximum, on the basis of a 95% confidence level.

    The board has no idea about the (f)actual level of compliance. Compliance was - until now - not addressed at a more detailed employer dossier level.
    Therefore you decide to start with a simple sampling approach.

  • Step 2: Define Sample Size
    In order to define the right sample size, portfolio size is important.
    After a quick call, PensionAdvice gives you a rough estimate of their portfolio: around 2,500 employer pension dossiers.

    You pick up your 'sample table spreadsheet' and are confronted with the first serious issue.
    An adequate sample (95% confidence level) would require a minimum of 334 samples. With around 10-20 hours of research per dossier, the costs of a sampling project of this size would get way out of hand and become unacceptable, as they would raise the total cost of PensionAdvice (check this before you conclude so! - a sketch of the classic formula follows at the end of this step).

    Lowering the confidence level doesn't solve the problem either. Sample sizes of 100 and more are still too costly, and confidence levels of less than 95% are of no value in relation to the client's ambition (compliance = calling card).
    The same goes for a higher - more than 5% - 'Error Tolerance'.....

    By the way, in case of samples from small populations things will not turn out better. To achieve relevant confidence levels (>95%) and error tolerances (<5%), samples must have a substantial size in relation to the population size.


    You can check all this out 'live' in the next spreadsheet and modify the sampling conditions to your own needs. If you don't know the variability of the population, use a 'safe' variability of 50%. Click 'Sample Size II' for modeling the sample size of PensionAdvice.
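
    As a quick sanity check on the 334 figure, here's a minimal sketch of the classic finite-population sample size formula (assuming a 95% confidence level, 5% error tolerance and 50% variability):

import math

# Classic finite-population sample size:
# n = N * z^2 * p(1-p) / (e^2 * (N-1) + z^2 * p(1-p))
def sample_size(N, z=1.96, e=0.05, p=0.5):
    num = N * z**2 * p * (1 - p)
    den = e**2 * (N - 1) + z**2 * p * (1 - p)
    return math.ceil(num / den)

print(sample_size(2500))   # -> 334 dossiers, as in the example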



  • Step 3: Use Bayesian Sample Model
    The standard sampling approach above could deliver smaller samples if we could be sure of low variability.

    Unfortunately we (often) do not know the variability upfront.

    Here a method based on efficient sampling and Bayesian statistics comes to our help, as clearly described by Matthew Leitch.

    A more simplified version of Leitch's approach is based on Laplace's famous 'Rule of Succession', a classic application of the beta distribution (technical explanation (click)); see the sketch after this list.

    The interesting aspects of this method are:
    1. Prior (weak or small) samples or beliefs about the true error rate and confidence levels can be added to the model in the form of an (artificial) additional (pre)sample.

    2. As the sample size increases, it becomes clear whether the defined confidence level will be met, and whether adding more samples is appropriate and/or cost effective.
       This way unnecessary samples are avoided, sampling becomes as cost effective as possible, and auditor and client can dynamically develop a grip on the distribution. Enough talk, let's demonstrate how this works.
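
Before the demonstration, here's a minimal sketch in Python of the beta distribution calculation behind the model (my own illustration of the rule of succession, not Leitch's spreadsheet). The prior beliefs of management enter as a fictitious pre-sample:

from scipy.stats import beta

# Confidence that the true noncompliance rate is at most 'max_rate', given
# 'bad' noncompliant files in a sample of size 'n', starting from a uniform
# Beta(1,1) prior plus a fictitious pre-sample of 'prior_n' files containing
# 'prior_bad' noncompliant ones.
def confidence(n, bad, max_rate=0.05, prior_n=0, prior_bad=0):
    a = 1 + bad + prior_bad
    b = 1 + (n - bad) + (prior_n - prior_bad)
    return beta.cdf(max_rate, a, b)

print(round(confidence(0, 0, prior_n=27), 2))    # prior only: ~0.76
print(round(confidence(10, 0, prior_n=27), 2))   # after 10 clean files: ~0.86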

Sample Demonstration
The next example is contained in an Excel spreadsheet that you can download, and is presented in a simplified spreadsheet at the end of this blog. You can modify this spreadsheet (online!) to your own needs and use it for real life compliance sampling. Use it with care in case of small populations (N<100).

A. Check on the prior beliefs of management
Management estimates the actual NonCompliance rate at 8%, with 90% confidence that the actual NonCompliance rate is 8% or less:



If management has no idea at all, or if you don't want to include management's opinion, simply set both (NonCompliance rate and confidence) at 50% (= indifferent) in your model.

B. Define Management Objectives
After some discussion, management defines the (target) maximum acceptable NonCompliance rate at 5% with a 95% confidence level (= CL).



C. Start Sampling
Before you start sampling, please notice how the prior beliefs of management are rendered into a fictitious sample (test number = 0) in the model:
  • In this case prior beliefs match a fictitious sample of size 27 with zero noncompliance observations.
  • This fictitious sample corresponds to a confidence level of 76% on the basis of a maximum (population) noncompliance rate of 5%.
[If you think the rendering is too optimistic, you can change the fictitious number of noncompliance observations from zero into 1, 2 or another number (examine in the spreadsheet what happens and play around).]

To lift the 76% confidence level to 95%, it would take an additional sample size of 31 with zero noncompliance outcomes (you can check this in the spreadsheet).
As sampling is expensive, your employee Jos runs a first test (test 1) with a sample size of 10 with zero noncompliance outcomes. This looks promising!
The cumulative confidence level has risen from 76% to over 85%.



You decide to take another limited sample with a sample size of 10. Unfortunately this sample contains one noncompliant outcome. As a result, the cumulative confidence level drops to almost 70% and another sample of size 45 with zero noncompliant outcomes is necessary to reach the desired 95% confidence level.

You decide to go on, and after a few other tests you finally arrive at the intended 95% cumulative confidence level. Mission succeeded!



The great advantage of this incremental sampling method is that if noncompliance shows up at an early stage, you can:
  • stop sampling, without having incurred major sampling costs
  • improve compliance of the population by means of additional measures, based on the learnings from the noncompliant outcomes
  • start sampling again (from the start)

If - for example - test 1 had had 3 noncompliant outcomes instead of zero, it would take an additional test of size 115 with zero noncompliant outcomes to achieve a 95% confidence level. It's clear that in this case it's better to first learn from the 3 noncompliant outcomes what's wrong or needs improvement, than to go on with expensive sampling against your better judgment.



D. Conclusions
On the basis of a prior belief that - with 90% confidence - the population is 8% noncompliant, we can now conclude that after an additional total sample of size 65, PensionAdvice's noncompliance rate is 5% or less with a 95% confidence level.

If we want to be 95% sure without the 'prior belief', we'll have to take an additional sample of size 27 with zero noncompliant outcomes as a result.

E. Check out

Check out and download the following spreadsheet. Modify the sampling conditions to your own needs and download the Excel spreadsheet.


Finally
My apologies for this much too long blog. I hope I've succeeded in keeping your attention....


Related links / Resources

I. Download official Maggid Excel spreadsheets:
- Dynamic Compliance Sampling (2011)
- Small Sample Size Calculator

II. Related links/ Sources:
- 'Efficient Sampling' spreadsheet by Matthew Leitch
- What Is The Right Sample Size For A Survey?
- Sample Size
- Epidemiology
- Probability of adverse events that have not yet occurred
- Progressive Sampling (Pdf)
- The True Cost of Compliance
- Bayesian modeling (ppt)

Sep 12, 2011

Pisa or Actuarial Compliant?

When we talk about actuarial compliance, we usually limit it to our strict actuarial work field.
In a broader sense, as 'risk managers', we (actuaries) have a more general responsibility for the sustainability of the company we work for.

Compliance is not just about security, checks, controls, protection, fraud prevention and ethical behavior. Moreover, compliance is the basis of adequate risk management and of delivering high-standard services and products to your company's clients.

Pisa Compliant
No matter how brilliant and professional our calculations, if the data on which these calculations are based are 'limited', 'of insufficient quality' or 'too uncertain', we as actuaries will ultimately fail.

Therefore, building actuarial sandcastles is great art, but completely useless. Matthew 7:26 tells us: it's a foolish man who builds his actuarial house on the sand....

And so, let's take a look at whether we have indeed become 'Pisa Compliant', by checking if our actuarial compliance is built on sand or on solid ground. In other words: let's check if actuarial compliance itself is compliant....

Actuarial Data Governance
To open the discussion, let's start with some challenging Data Governance questions:

  • Data quality compliance
    How is 'data quality compliance' integrated into your actuarial daily work? Have you addressed this issue? And if so, do you just rely on statements and reports of others (auditors, etc.), and can you agree upon the data quality standards (if there are any)? In other words: are the data, processes and reports you base your calculations on 100% reliable and guaranteed? If not, what's the actual confidence level of your data, and do you report about this confidence level to the board?

  • Data quality Confirmation
    Have you checked your calculation data set on the basis of samples or second opinions?

    And if so, do you approve of the methods used, the confidence level and the outcome of the data audit?

    Or do you just 'trust' the blue eyes of the accountant or auditor and formally state you're "paper compliant"?

    Did you check whether client information, e.g. pension benefit statements, is not only in line with the administrative data, but also in line with insurance policy conditions or pension scheme rules?

  • Up to date, In good time
    To what quantitative level is the administrative data 'up to date', and is it transparent?

    Do you receive reporting and tracking of administrative backlogs and delays, and if so, how do you translate these findings into your calculations?

  • Outsourcing
    From a risk management perspective, have you formulated quantitative and qualitative demands (standards) in outsourcing contracts, like 'asset management', 'underwriting' and 'administration' contracts?

    Do you agree on these contracts, do 'outsourcing partners' report on these standards, and do you check these reports regularly at a detailed level (samples)?

And some more questions you have to deal with as an actuary:
  • Distribution Compliance
    Are the intermediary, the employers and the customers your company deals with compliant? What's the confidence level of this compliance, and in case of partial noncompliance, what could be the financial consequences (claims)?

  • Communication Compliance
    Is communication with employees, customers, regulators, supervisors and shareholders compliant? Has your board (and you!) defined what compliance actually means in quantitative terms?

    Is 'communication compliance' based on information (delivery and check) or on communication?

    In this case, have you also checked whether (e.g.) customers understood what you tried to tell them?

    Not by asking if your message was understood, but by quantitative methods (tests, polls, surveys, etc.) that undisputedly 'prove' the customer really understood the message.

    Effective Communication Practice
    Never ask if someone has understood what you've said or explained. Never take it for granted when someone tells you he or she 'got the picture'.

    Instead, act as follows: at the end of every (board) presentation, ask that final and unique question whose answer assures you that your audience has really understood what you tried to bring across.

Checking Compliance
Now we get to the quantitative 'hard part' of compliance:

How to check compliance?

This interesting topic will be considered in my next blog.... ;-)

To lift a little corner of the veil, just a short practical tip to conclude this blog:

Compliance Sample Test
From a large portfolio you've taken a sample of 30 dossiers to check on data quality. All of them are found compliant. What's the upper limit of the noncompliance rate in case of a 95% confidence level?

This type of question is a typical case of:

“If nothing goes wrong, is everything alright?”

Answer.
The upper limit can be roughly estimated by a simple rule of thumb, called the 'Rule of three'....



'Rule of three for compliance tests'
If no noncompliant events occurred in a compliance test sample of n cases, one may conclude with 95% confidence that the rate of noncompliance will be less than 3/n.

In this case one can be roughly 95% sure the noncompliance rate is less than 10% (= 3/30). Interesting, but slightly disappointing, as we want to chase noncompliance rates in the order of 1%.
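
You can check how close this rule of thumb comes to the exact zero-failure bound, which solves (1-p)^n = 0.05, with a small Python sketch:

# Exact 95% upper bound with 0 failures in n samples: p = 1 - 0.05**(1/n).
# The 'rule of three' approximates this as 3/n.
for n in (30, 100, 300):
    exact = 1 - 0.05 ** (1 / n)
    print(f"n={n:4d}  exact={exact:.4f}  rule of three={3/n:.4f}")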

Working backwards from the rule of three, a 1% noncompliance rate would call for samples of 300 or more. Despite the fact that research covering 46 international organizations showed that, on average, noncompliance cost is 2.65 times the cost of compliance, samples of this size are often (perceived as) too cost inefficient and not practicable.

Read my next blog to find out how to solve this issue....

Related Links:
- Actuarial Compliance Guidelines
- What Is The Right Sample Size For A Survey?
- Epidemiology
- Probability of adverse events that have not yet occurred
- The True Cost of Compliance (2011)
- 'Rule of three'
- Compliance testing: Sampling Plans (accounting standards) or Worddoc

Sep 9, 2011

Humor: Merkozy, It's too late

Comparing 5 year exchange rates USD/EUR (decline: 17%) and USD/CNY (decline: 21%) clearly shows the negative outlook for the US Dollar.

Both U.S. and Europe, are facing severe debt problems they can not solve with more debt.

Desperate actions
President Obama tries to stimulate the economy by creating 1.5 million new jobs with a $450 billion investment (American Jobs Act).

In Europe, chancellor Merkel (Germany) and president Sarkozy (France) have joined their strengths and totally different characters into 'one personality', to create not only a strong financial but also an economic European union.

This new economic union is necessary to establish a firm grip on the measures that financially weak European countries like Greece, Italy, Ireland, Spain and Portugal have to take to recover from their debt.

It's too late
Unfortunately there's no support for such an initiative. No Merkozy will be able to prevent Europe from a financial meltdown.
The only way out seems to be a European split into relatively strong and weak countries.

Make up your mind about the geographic spread of your company's assets. Get out before it's too late....



Sep 7, 2011

Irrational Risk

Actuarial work is demanding..., so you're arriving late at your hotel that night. The hotel manager has only two rooms left. These two rooms are exactly the same, except for one aspect: the fire alarm.....


The manager tells you that in the event of a nighttime fire due to the usual causes, guests in Room 1, equipped with Alarm 1, have an actuarially calculated 2% chance of dying. Guests in Room 2, equipped with Alarm 2, have only a 1% chance of dying.

However - things in life are always complicated - there's a slight problem.....

According to the manager...... The wiring of Alarm 2 is such that it sometimes causes electrical fires that increase the risk of dying in a nighttime fire by an additional 0.01%.

In other words, Alarm 1 is associated with a 2% risk of death and Alarm 2 is associated with a 1% + 0.01% ('betrayal') risk of death.
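
The plain expected-risk arithmetic behind this choice is trivial (a one-line sketch):

p_room1 = 0.02             # 2% chance of dying with Alarm 1
p_room2 = 0.01 + 0.0001    # 1% fire risk plus 0.01% 'betrayal' risk with Alarm 2
print(p_room1 / p_room2)   # Room 1 is roughly twice as deadly, yet most people pick it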

What room do you choose as a professional actuary?

Outcome
According to a study by Gershoff and Koehler, most participants chose the room with Alarm 1, even though Room 1 has double the risk of fire death, according to the researchers. Reason: most participants found the tiny risk of 'betrayal' (product malfunction) much more frightening than the much larger risk of actually dying. When people get upset by a tiny risk, they often paradoxically choose the much larger risk.

Personally, I think a more imaginable risk 'weighs' stronger than a non-specific abstract risk, and that in general people are unaware of conditional probability effects......

Conclusion
This simple example proves that emotion has a strong influence on risk decisions.

Just like in our actuarial profession, risk decisions are often irrational.

It is our duty as actuaries to demystify and rationalize risk. However, sometimes we're victims of the same emotional bias....




Read more about this interesting subject on:

- Vaccination and betrayal aversion (2011)
- Safety First? The Role of Emotion in Safety Product Betrayal Aversion (2011)

Aug 7, 2011

U.S. Debt Autopsy

Coming back from vacation, the world seems lost. You don't need to be an actuary to grasp that the recent decision to lift the U.S. debt ceiling is first class trickery and completely inadequate.

What the Chinese rating agency Dagong already concluded back in November 2010 is only now (August 5, 2011) reluctantly and partly followed by S&P:

U.S. AAA status = Dead.

It's interesting to see which countries Dagong rates lower than the three famous rating agencies in the U.S. do (download the complete Dagong report).

Meanwhile, Dagong downgraded the U.S. again, to an ordinary A-status, on August 2, 2011.

The arguments for country downgrades (such as the U.S.) are as clear as they are simple: if lifting debt ceilings is not combined - at the same time - with serious debt reduction measures (spending cuts), you go DOWN!

The outlook on the U.S. is still negative.


Outlook
Let's take a look at what happened during 2011 and what 2012 will bring..


This chart instantaneously makes clear what's happening:

  • Jan 2011 - mid May 2011
    Although the whole world can figure out that the original debt ceiling of $14.294 trillion will be reached within a few months, no measures or actions are taken by the U.S. Treasury to prevent a debt default.

  • May 16, 2011 - August 1, 2011
    Treasury Secretary Timothy Geithner informs Congress he will start tapping into federal pension funds on Monday to free up borrowing capacity as the nation hits the $14.294 trillion legal limit on its debt.

    By these and (possibly) other optical actions, the actual debt is kept artificially stable, slightly above the first ceiling. Of course the factual debt - unreported and invisible - continues to grow.

  • August 1&2, 2011
    The U.S. House of Representatives and the U.S. Senate pass the Budget Control Act on Aug 2, 2011. The debt ceiling is immediately raised by $400 billion, to $14.694 trillion.

    A second debt ceiling increase allows the current new ceiling to grow by an additional $500 billion, to $15.194 trillion, so that the government can pay its bills until the end of February 2012. However, Congress has the authority to reject this second increase.

  • August 5, 2011 - March 2012
    TreasuryDirect reports show the debt catch-up effects on August 5, 2011.

    Already $271 billion of the $400 billion debt ceiling lift turns out to be 'consumed'. Another $129 billion is left.

    As my two-year-old son can calculate: if no measures become effective, around mid-September 2011 a new ceiling crisis and media lift-show will start.

    After agreeing in September to the second ceiling of $15.194 trillion, the muppet debt show will start again in March 2012, when the second ceiling will break.

The party is over
I'm not a pessimistic person by nature, but the U.S. is running out of possible solutions. 

It looks like the financial space flight program is over. 

We'll have to build a society on new ethical financial principles.

If real measures fail to materialize and claims on other countries or banks (as was the case with the subprime debacle) are limited, the U.S. will unfortunately default in the end.

This U.S. default will take most western countries along with it.

It will result in a worldwide financial meltdown.

The only way out that seems left is:

Inflation


Let's hope for the best or a miracle. God bless America!



Related Links and Resources:
- Spreadsheet (Excel) with 2011 Debt Data
- S&P Report, August 5, 2011
- TreasuryDirect (U.S. Debt development)
- Debt Ceiling Increase of 2011
- Alert - Just So We Don't Get Confused As To The Source Of Our Little Problem

Jul 6, 2011

Humor: Actuarial Mind


In July 2011, holidays - instead of blogs - are ahead...

Just chew this month on the following actuarial no-brainer:

The smartest actuary in the world
The Pope, a well-seasoned actuary and a student nurse are flying on an airplane. The captain comes back and says that he has some bad news and some really bad news. The bad news is that the plane is going to crash! As he puts on a parachute and jumps out, he says that the really bad news is that there are only 2 more parachutes.

The actuary says: “I am the smartest man in the world. I've just calculated my life expectancy to be more than fifteen years. Excuse me...” With that he puts on a parachute and jumps out.

The Pope says: “Well, my child, I would love to live, but I believe that my time is up. Please take the other parachute and save yourself.”

The student nurse says: “Not to worry Holy Father. Right now the smartest man in the world is trying to find the rip-cord on my back pack!”

Jun 27, 2011

Impact or Probability?

We all are more than familiar with the definition of Risk:

Risk = Probability × Impact = P × I

This way of measuring risk is a nice, simple, explainable and intuitive way of ordering risks in boardrooms or bathrooms, but it is unfortunately quite useless.

To demonstrate the limits of this typical kind of risk definition, let's take a look at the following story:

The Risk of bicycling

You've decided to start a 3 year math study at City University in London. From your brand new apartment in Southall, it's a 12.5 mile ride to the University at Southampton Street. As a passionate cyclist you consider the risk of cycling through London for the next three years.

Based on your googled 'DFT's Reported Road Casualties 2009' research (resulting in a cycling death rate of 36 per billion vehicle miles), you first conclude that the probability of getting killed in a cycling accident during your three year study is relatively low: 0.1% ≈ 3 [years] × 365 [days] × 25 [miles] × 36 [killed] ÷ 10⁹ [vehicle miles].
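
Spelled out, the back-of-the-envelope arithmetic from the text looks like this:

years, days, miles_per_day = 3, 365, 25
deaths_per_mile = 36 / 1e9     # DFT 2009: 36 cycling deaths per billion vehicle miles
p_death = years * days * miles_per_day * deaths_per_mile
print(f"{p_death:.2%}")        # ~0.10% over the three-year study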

Subjective probability
After this fact-finding you start to realize it's YOU getting on the bike, and it's YOUR 0.1% risk of DYING in the next three years of your study....

Hmmmm...this comes closer; it makes things a little different, doesn't it? 

It looks like 'subjective probability' - on reflection - is perhaps somewhat different from 'objective probability'.

While your left and right brain are still in a dormant paradoxical state of confusion, your left (logical) brain already starts to cope with the needs of the right (emotional) half that wants you on that bike at all costs!

Russian Roulette
Now your left brain tells you not to get emotional; after all, it is 'only' an additional 0.1% risk. Already your left brain starts searching for reference material to legitimize the decision you're about to take.

Aha!.... Let's compare it with 'Russian Roulette', your left brain suggests. Instead of 6 chambers we have a thousand chambers with one bullet. Hey, that makes sense, you say to yourself.

With such a 1000-chamber Russian gun against my head I would pull the trigger without hesitating.... Or wouldn't I?..... No..., to be completely honest, 'I wouldn't risk it', my right brain tells me.

Hey... my left brain now tells me my right brain is inconsistent: it wants me on the bike, but not to take part in an equal 'death probability game' of Russian roulette. Why not?

In Control
My left half concludes it must be the 'feeling' of my right side that makes me feel I'm 'in control' on my bike, but not in the case of Russian Roulette. That makes sense, my left brain tells me. Of course! Problem solved! My right and left brain finally agree: it's only a small risk and it's I who can control the outcome of a healthy ride. Besides, this way the health benefits of cycling massively outweigh the risks as well, my right brain adds superfluously.


A final check by my right brain tells me: if I can't trust myself, who can I trust?
This rhetorical question is the smashing argument for stepping on the bike and enjoying a wonderful ride through London City.
As ever...,


Aftermathematics
After returning from my accident-free bike trip, I enjoy a drink with a colleague of mine, the famous actuary Will Strike [who doesn't know him? ;-)].


After telling him my 'bike decision story', he amicably criticizes me for my unprofessional approach to this private decision problem. Will tells me that I should not only have analyzed the Probability (P), but also the Impact (I) of my decision. Remember the equation: Risk = P × I?

Yes of course, Will is right. How could I forget?... The probability of a deadly accident was only 0.1%.

Yet, 'when' a car hits you full on, the probability of meeting St. Peter at heaven's gate is 100% and the Impact (I) is maximal (I = 1; you're dead...).

Summarized:

Risk[death on bike; 25 miles/day; 3 years] =
Probability × Impact = 0.1% × 1 = 0.1%

From this outcome it's clear that, even though the Impact is maximal (I = 1 = 100%), on a '0% to 100% risk scale' this 3 year 'London Bike Risk Project' seems negligible and by no means a risk that would urge my full attention.

I'm finally relieved... it always makes a case stronger to have a decision verified by another method. In this case the Risk = P × I method confirmed the decision I took on the basis of my left-right brain discussion. Pff....

Afteraftermath
The next morning, after my subconscious brain had washed the 'bike dishes' of the day before, I wake up with new insights. Suddenly I realize I tried to base my biking decision on the wrong variable: Probability, instead of Impact.

Actually, in both cases and without realizing it, I finally took my decision on the basis of the Impact and the possible 'preventional control' (not damage control!) I could exert before and during my bike trip.

I had to conclude that in cases of high Impact (I > 0.9), neither my left-right brain chat nor the 'Risk = P × I' formula leads to a sound decision, because both are based too much on Probability instead of Impact. In other words:

In case of high Impact, probability is irrelevant


In case of high Impact, only control counts


From now on this 'bike conclusion' will be engraved in my memory and I will apply it in my professional work as well.



P.S. for disbelievers, the tough ones!
If you're convinced you would take the risk of firing the 1000-chamber Russian gun against your head, you probably value the fun of the bicycle trip higher than the probability of losing your life or good health.

In this case, suppose someone offered you an amount of money to take part in a 1000-chamber Russian roulette instead of a bicycle tour. At which amount would you settle?

Let's assume you would settle at €10,000,000 (I wouldn't settle for less). In that case you really value your bicycle trip!!!!

As we've seen in the banking business as well: extremely low probabilities and high impact situations are tricky! That's why stress tests focus on impact and not on probability.

The different faces of Risk
Another issue when looking at risk is that risk is always conditional.
'The' probability of death or 'the' mortality rate doesn't exist. Mortality depends on a number of variables, such as age (the run-down state of your DNA quality), the DNA quality you were born with, and lifestyle. Secondly, mortality also depends on a number of uncertain events in your life.

To demonstrate this 'chameleon property' of probability, let's take a look at the probability of a meteor hitting good old earth.

The initial probability of an asteroid devastating the earth within a 10 year time frame is around 0.1%. A typical case of low probability and high impact. Once we've become aware of a spotted meteor heading in our direction, the probability suddenly changes from a general probability in a time frame to an asteroid-specific probability during its actual passage of the earth.
In the case of the asteroid '2011 MD', which will pass the earth at a minimal distance of 11,000 km on June 27, 2011, this specific probability turns out to be 0.11% (remember the Russian gun...).

With a diameter of around 8 meters, this asteroid is no big threat to our civilization.

Here's a short impression of what's coming flying in on us within the next decades (Source: NASA; asteroid > 50 meters or minimum distance < 100,000 km):



Apart from some 'big asteroids' in the next decade, this picture puts our minds at rest. Yet we should keep in mind that most asteroids are discovered only a few weeks before a possible collision...


Risk Maps
A nice example of the limits of the Risk = P × I model, in combination with a nice alternative, is demonstrated by Fenton and Neil in a document called 'Measuring your Risks: Numbers that would make sense to Bruce Willis and his crew'.

In their document they analyze the case of the film Armageddon, where an asteroid the size of Texas is on a direct collision course with the earth and Harry Stamper (alias Bruce Willis) saves the world by blowing it up.

Trying to fill in the Risk = P × I model in this Armageddon case is useless.

In this case, Risk is defined as:

Risk = [Probability of Impact] × [Impact of asteroid striking the earth]
Fenton and Neil conclude:
  • We cannot get the Probability number.
    The probability number is a mix-up. In general it makes no sense, and it's too difficult for a risk manager to give the unconditional probability of every 'risk' irrespective of relevant controls, triggers and mitigants.
  • We cannot get the Impact number.
    Impact (on what?) can't be defined unconditionally without also considering the possible mitigating events.
  • The Risk score is meaningless.
    Even if we could get round the two problems above, what exactly does the resulting number mean?
  • It does not tell us what we really need to know.
    What we really need to know is the probability, given our current state of knowledge, that there will be massive loss of life.

Instead of the Risk = P × I model, Fenton and Neil propose (in 'Measuring risks') the use of causal models (risk maps), in which a risk is characterised by a set of uncertain events.

Each of these events has a set of outcomes, and the 'uncertainty' associated with a risk is not a separate notion (as assumed in the classic approach).
Every event (and hence every object associated with risk) has uncertainty that is characterised by the event's probability distribution.

Examples:

The Initial risk of meteor strike
The probability of loss of life (meaning at least 80% of the world population) is about 77%:



In terms of the difference that Bruce Willis and his crew could make, there are two scenarios: (1) the meteor is blown up and (2) it is not.




Reading off the values for the probability of 'loss of life' being false, we find that we jump from 8.5% (meteor not blown up) to 82% (meteor blown up). This near tenfold increase in the probability of saving the world clearly explains why it merited an attempt.
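
To give a feel for how such a causal model produces these numbers, here's a minimal, purely hypothetical sketch; the conditional probabilities below are invented for illustration and differ from Fenton and Neil's actual model:

# Two-node risk map: P(loss of life) depends on whether the meteor strikes,
# which in turn depends on whether it is blown up (all values assumed).
p_strike = {"blown up": 0.10, "not blown up": 0.95}
p_loss_if_strike, p_loss_if_miss = 0.95, 0.03

for action, ps in p_strike.items():
    p_loss = ps * p_loss_if_strike + (1 - ps) * p_loss_if_miss
    print(f"{action}: P(no loss of life) = {1 - p_loss:.1%}")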

Lessons learned
Use (Bayesian) risk maps rather than the Probability-Impact model or risk heat maps if you want to base your decisions on facts instead of intuition.

P.S. Many thanks to Benedict Escoto, who pointed out to me a wrong interpretation of the bicycle risk based on the Biomed info.
See document: Deaths of Cyclists in London: trends from 1992 to 2006
I rewrote this blog based on information from the DFT.

Related Links:
- DFT's Reported Road Casualties 2009
- Pedal cyclist casualties in reported road accidents: 2008 
- Is Cycling Dangerous?
- Cycling in London – How dangerous is it? (2011)
- Nasa: Small Asteroid to Whip Past Earth on June 27, 2011
- Nasa: Close (future) asteroid approaches...
- Nasa: Differences between Asteroid, Comet, Meteoroid, etc.
- Nasa: Search asteroid approaches in data base
- Nasa: Impact Probability of asteroids 
- Fenton & Neil: Measuring risks
- Fenton & Neil: Bayesian networks explained (pdf)
- Neil: Using Risk Maps!

Addenda:
Using Risk Maps


Deadly bike accidents in London




Jun 13, 2011

Actuary Garfield

There's not a lot of 'Actuary Humor' on the Internet. Here's one...

Actuary Garfield explains how actuaries think...


Great, and lots of humor, those Garfield cartoon strips (especially those about actuaries....).

Original Sources:
- Garfield Snow
- Garfield Snowman