Credit Risk Management Dr. Lukas Prorokowski Credit Risk Management About me: Dr Lukas Prorokowski Deputy Head of Validation at Banque International a Luxembourg Lukas.Prorokowski@gmail.com Lukas Prorokowski About the unit: This unit will be divided into the following blocks: • Theory • Practitioner’s Insights • Case Studies • Practical Exercises (excel) Learning Outcome • Credit Risk Management How is credit risk managed within a bank? What are the credit risk mitigation techniques? • Three Lines of Defence Model What is the governance framework for credit risk management? • Regulatory Framework for Credit Risk Management What is the regulatory background for credit risk management? What is the difference between the Standardised, Foundation and Advanced approach to credit risk measurement? • Credit Risk Modelling What are the common models for credit risk: Probability of Default (PD); Loss Given Default (LGD); Exposure at Default (EAD); Expected Credit Loss (ECL)? • Credit Risk Model Validation What is the internal model validation function? What is a credit risk model lifecycle? How is model risk managed by a bank? • Backtesting of Credit Risk Models How are credit risk estimates backtested (with case studies)? Credit Risk Management • What is Credit Risk management? • It is a continuous process of: • Identifying risk • Analysing risk • Modelling risk • Mitigating risk • Monitoring risk • Predicting risk Identify Analyse Quantify Mitigate Monitor Credit Risk Management • Mitigation of Credit Risk In the banking practice, credit risk mitigation constitutes an important element of risk management complementing the core credit risk modelling and estimation, counterparty credit quality checks and setting the risk appetite. There are several credit risk mitigation techniques among which applying the collateral haircuts is most prominent across the banking industry: • Holding collateral as security against granted credit lines; • Transferring risk to special purpose vehicles; • Implementing credit limits; • Netting and collateral arranging for exposures from derivative instruments. Credit Risk Management • Mitigation of Credit Risk What is the most common collateral accepted by banks? • Cash & Gold • Bonds (fixed income instruments) • Funds • Equities Three Lines of Defence Model • Three Lines of Defence Credit Risk Management is conducted through the 3 Lines of Defence model. All lines are coordinating tasks to set up the governance framework. • Credit Risk Modelling Team • Credit Risk Data Management Team • Model Implementation Team • Model Risk Manager (Credit Risk Unit) First Line of Defence • Model Validation (Independent Model Validation Unit) Second Line of Defence • Internal Audit Third Line of Defence CRO Three Lines of Defence Model • First Line of Defence • Credit Risk Modellers (Credit Risk Modelling Team): • are responsible for all model development activities comprising methodology, design and prototype construction. They provide the documentation throughout the development and keep it updated with changes in the model. • test the credit risk model, check its accuracy and identify its limitations → inform the Model User on the model’s usage limits. • Manage, with the support of the Model Implementation Team, the model implementation → define the specifications, review the planning and the implementation and oversee the model testing. • Keeping the Model Risk Manager (Credit Risk Unit) informed about the evolution of the credit risk model (e.g. 
recalibration, redevelopment, change in scope). Model Documentation Modelling Team Review Model Documentation Validation Team Three Lines of Defence Model • First Line of Defence • Credit Risk Data Management Teams: • The input data is one component of a credit risk model, and the Data Management Team is responsible for the activities related to data quality. With the support of the Credit Risk Modelling Team, which identifies the required data for the model, the Data Management Team is responsible for assessing data completeness and its appropriateness for the model design. • Manage, in close collaboration with the Model Implementation Team, the implementation of the data including: the source, collection, storage, validity and traceability of the data. • Keep the Senior Management (CRO) informed about data quality and improvement areas. • Monitor the Data Quality and take the appropriate actions to maintain a high level of data quality. Data Quality Assurance Data Team Review Model Data Validation Team Three Lines of Defence Model • First Line of Defence • Model Implementation Teams: • The Model Implementer is responsible for ensuring that the correct implementation takes place, whilst the Model Developer (Credit Risk Modelling Team) remains accountable for the end results. • Model implementation is usually managed on a day-to-day basis by IT based on the Model Developer's requirements. For models which do not require IT, the role of Model Implementer can be assumed by the Model Developer. • Manage the dataflow between all involved systems. Implement the model data in accordance with the Model Data Manager's (Credit Risk Data Management Team) specifications. • Ensure that the access rights to the model implementation and data source are securely managed. Grant access to systems Implementation Team Review Credit Rating Systems Validation Team Three Lines of Defence Model • First Line of Defence • Model User: • Responsible for operating the model in a manner consistent with the approved model documentation. • Assists the Model Developer in defining the purpose of the model. • Informs the Credit Risk Modelling Team of exceptions and overrides. • Keeps the Senior Manager (CRO) informed about the quality and deficiencies of the model output. • Reports unexpected model behaviour or deficiencies to the Credit Risk Modelling Team. • Provides and maintains up-to-date procedures for model usage. Overrides & Feedback Model User Overrides & Feedback Modelling Team Analyse Overrides Validation Team Three Lines of Defence Model • First Line of Defence • Model Risk Manager (Credit Risk Unit – CRU): • The Model Risk Manager is responsible for establishing credit risk management standards. • Oversees compliance with the Model Risk Management policies and procedures. • Maintains the model inventory tool. • Provides information to the Senior Management (CRO) about significant sources of model risk, model development flaws and implementation failures. • Manages the model's versioning, especially ensuring that the correct version of the model is properly disseminated. • Communicates with the other teams of the First and Second Lines of Defence. Latest version of model document CRU Review Model Document Validation Team Obligation 16 Obligation 16: The Supervised Entity shall ensure that the process for drafting the model development report is transparent and valid.
If changes to the model development report are necessary after the deliberations and model approval at IRSPC, the correct version of the report shall be re-presented to IRSPC and RPC. The IRSPC and/or RPC shall explicitly require a re-presentation of the report at the committee and include an action point in the minutes. Deadline 31.12.17 Status R Response: No Versioning of the Modelling Documentation 0.X • Draft in development 1.X • Final Draft • Out for comment 2.X • Under review 3.X • Out for Risk Policy Committee (RPC) approval 4.X • Post RPC/MRC Three Lines of Defence Model • First Line of Defence • Model Owner: • The Model Owner is responsible for many aspects of the model lifecycle and ensures that the model is properly developed, implemented, documented and used. • Ensures that the credit risk model has undergone the appropriate validation and approval processes, with prompt identification of model changes. • Model Owners should be sufficiently senior within the Bank to be able to interact with the other actors concerned throughout the model lifecycle. • The role of Model Owner will usually be assumed by the Model Risk Manager. Model Owner Model Risk Manager Three Lines of Defence Model • Second Line of Defence • Internal Model Validation (Independent Model Validation Unit) • As a second line of defence, Internal Model Validation is responsible for independently verifying that the model proposed for use by the Credit Risk Modelling Team is fit for its intended purpose. • Internal Model Validation has explicit authority and independence to challenge related stakeholders and to present issues and highlight deficiencies. • Internal Model Validation is responsible for validating the key aspects of models: Model Design Data Quality Model Implementation Model Performance Model Validation Governance Process Three Lines of Defence Model • Third Line of Defence • Internal Audit • Internal Audit is the third line of defence and is independent of all other functions. • Its role is not to duplicate model risk management activities but rather to ensure that the first two lines of defence are operating effectively. • Reviews model validation reports and confirms their compliance with the Validation Policy. • Confirms that the model governance is compliant with the Bank's expectations and the applicable regulatory requirements (especially the independence of the validation function). • Reviews annually the framework for Pillar I and IFRS 9 models and their related parameters (including compliance with all applicable requirements). Validation Report Validation Team Review Validation Report Internal Audit Three Lines of Defence Model • Committees • Several Committees are established to consolidate the Model Risk Governance and to provide adequate follow-up and decisions: • Approves the Model Risk Governance Framework Board of Directors • Oversees Model Risk and Model Performance Board Risk Committee • Ensures the comprehensiveness and the consistency of the policies and procedures related to credit risk model concerns Risk Policy Committee • Oversees the lifecycle of each model, approves validation reports and performance reports Model Risk Committee • Makes the final decision during the MRC meeting in case of a disagreement Escalation Committee • Monitors the use of credit risk models and reviews the rating process Rating Committee Learning Outcome • Credit Risk Management How is credit risk managed within a bank? What are the credit risk mitigation techniques?
• Three Lines of Defence Model What is the governance framework for credit risk management? Learning Outcome • Credit Risk Management How is credit risk managed within a bank? What are the credit risk mitigation techniques? • Three Lines of Defence Model What is the governance framework for credit risk management? • Regulatory Framework for Credit Risk Management What is the regulatory background for credit risk management? What is the difference between the Standardised, Foundation and Advanced approach to credit risk measurement? • Credit Risk Modelling What are the common models for credit risk: Probability of Default (PD); Loss Given Default (LGD); Exposure at Default (EAD); Expected Credit Loss (ECL)? • Credit Risk Model Validation What is the internal model validation function? What is a credit risk model lifecycle? How is model risk managed by a bank? • Backtesting of Credit Risk Models How are credit risk estimates backtested (with case studies)? Regulatory Framework for Credit Risk • Basel II: • credit risk measurement approach: Standardised • Use of regulatory prescribed weights for exposure classes; • Over-reliance on agency ratings; • Some flexibility on the choice of risk weight assignment. FIRB (Foundation) • Use of own models for Probability of Default (PD) – subject to the regulatory approval; • Use of regulatory prescribed LGD models (e.g. regulatory floors). AIRB (advanced) • Use of own models for: • PD • LGD • EAD • Credit risk models are subject to regulatory review. MODEL COMPLEXITY Standardised Approach The Basel Committee allows for two approaches of calculating sovereign risk weights to be made: • In the first case, banks can rely on external ratings mapped to specific risk weights. • In the second case, banks can use scores assigned to sovereign exposures by Export Credit Agencies (ECA). However, the methodology of calculating the ECA’s scores must be aligned to the OECD-agreed principles and approved by the supervisor. • Against this backdrop, only a limited number of sovereigns are assigned the ECA’s scores. On the other hand, there are sovereign exposures that lack external ratings. Reliance on External Agency Ratings: Rating: AAA to AA- A+ to A- BBB+ to BBB- BB+ to B- Below B- Fallback Risk Weight: 0% 20% 50% 100% 150% 100% Reliance on ECA’s Scores: Score: 0 to 1 2 3 4 to 6 7 Risk Weight: 0% 20% 50% 100% 150% Standardised Approach Scenario Script Description Scenario 1 S1 Bank relies only on the agency ratings when calculating risk weights Scenario 2 S2 Bank relies only on the ECA’s scores when calculating risk weights Scenario 3 S3 Bank toggles between the agency ratings and ECA’s scores to achieve the lowest risk weights (liberal approach) Scenario 4 S4 Bank toggles between the agency ratings and ECA’s scores to achieve the highest risk weight (conservative approach) Scenario 5 S5 Smoothed approach proposed in this paper Aggregated results of the capital charges achievable under different scenarios: S1 – use of agency ratings; S2 – use of ECA’s scores; S3- liberal approach; S4- conservative approach; S5- smoothed approach. The currency used in this simulation is EURO. The original exposure to the entire data sample is EUR 137,000. 
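As a minimal sketch of the risk-weight lookup just described, the snippet below applies the agency-rating and ECA-score weight tables from the slide to a single sovereign exposure and shows how the liberal (S3) and conservative (S4) toggles pick the lowest and highest weight. The example exposure, its rating band and ECA score are hypothetical; summing the resulting risk-weighted amounts over the whole portfolio gives the scenario totals charted below.

```python
# Illustrative sketch of the Standardised Approach risk-weight lookup.
# The weight tables follow the slide; the example exposure, rating and ECA score are hypothetical.

AGENCY_WEIGHTS = {  # external agency rating band -> risk weight
    "AAA to AA-": 0.00, "A+ to A-": 0.20, "BBB+ to BBB-": 0.50,
    "BB+ to B-": 1.00, "Below B-": 1.50, "Unrated (fallback)": 1.00,
}
ECA_WEIGHTS = {  # Export Credit Agency score -> risk weight
    "0 to 1": 0.00, "2": 0.20, "3": 0.50, "4 to 6": 1.00, "7": 1.50,
}

def rwa(exposure: float, risk_weight: float) -> float:
    """Risk-weighted amount = exposure x risk weight."""
    return exposure * risk_weight

# Hypothetical sovereign exposure of EUR 10,000 rated "BBB+ to BBB-" with ECA score "2".
exposure = 10_000.0
w_agency = AGENCY_WEIGHTS["BBB+ to BBB-"]
w_eca = ECA_WEIGHTS["2"]

print(rwa(exposure, w_agency))              # S1: agency ratings only
print(rwa(exposure, w_eca))                 # S2: ECA scores only
print(rwa(exposure, min(w_agency, w_eca)))  # S3: liberal approach (lowest weight)
print(rwa(exposure, max(w_agency, w_eca)))  # S4: conservative approach (highest weight)
```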
[Chart] Risk Weighted Capital (EUR) under each scenario: Original Exposure 137,000; Scenario 1: 119,700; Scenario 2: 145,600; Scenario 3: 116,400; Scenario 4: 148,900; Scenario 5: 132,650. Regulatory Framework for Credit Risk • Basel II: • Our focus: • Apart from Basel II, what other regulations encompass credit risk? AIRB (advanced) • Use of own models for: • PD • LGD • EAD • Credit risk models are subject to regulatory review. Regulatory Framework for Credit Risk CRR CRD IV TRIM New Definition of Default EBA GL AnaCredit IFRS 9 BCBS 239 EBA's Guidelines on PD estimation, LGD estimation and the treatment of defaulted exposures Regulatory Framework for Credit Risk • CRR: Regulation Issue Compliance Check Evidence Exposure Assignment Art. 173(1)(a) of Reg. (EU) 575/2013 For exposures to corporates, where an institution uses the PD/LGD approach, assignment of exposures and periodic reviews of assignments shall be completed or approved by an independent party that does not directly benefit from decisions to extend the credit. R • Check if the Validation Report goes through the relevant Committee (in this case: MRC) for the final approval. • Check that the composition of the Committee is such that it does not directly benefit from decisions to extend the credit. Art. 173(1)(c) of Reg. (EU) 575/2013 The institution shall have an effective process to obtain and update relevant information on obligor characteristics that affect PDs. R • Check if the relevant databases feed correctly into the model and are maintained by the Model Implementation Team and supervised by the Credit Risk Data Management Team. Regulatory Framework for Credit Risk • CRR: Regulation Issue Compliance Check Evidence Use of Model Art. 174(a) of Reg. (EU) 575/2013 The model shall have good predictive power and capital requirements shall not be distorted as a result of its use. The input variables shall form a reasonable and effective basis for the resulting predictions. The model shall not have material biases. £ How can you prove that a credit risk model has good predictive power? Art. 174(b) of Reg. (EU) 575/2013 The institution shall have in place a process for vetting data inputs into the model, which includes an assessment of the accuracy, completeness and appropriateness of the data. £ Who provides data quality assurance? Who validates the data quality report? Art. 174(c) of Reg. (EU) 575/2013 The data used to build the model shall be representative of the population of the institution's actual obligors or exposures. £ Who checks data representativeness, data completeness and data accuracy? Regulatory Framework for Credit Risk • CRR: Regulation Issue Compliance Check Evidence Use of Model Art. 174(a) of Reg. (EU) 575/2013 The model shall have good predictive power and capital requirements shall not be distorted as a result of its use. The input variables shall form a reasonable and effective basis for the resulting predictions. The model shall not have material biases. R How can you prove that a credit risk model has good predictive power? • Compare realised vs. predicted • Calibration backtesting • Override analysis Art. 174(b) of Reg. (EU) 575/2013 The institution shall have in place a process for vetting data inputs into the model, which includes an assessment of the accuracy, completeness and appropriateness of the data. R Who provides data quality assurance? • Data Management Team Who validates the data quality report? • Validation Team Art. 174(c) of Reg.
(EU) 575/2013 The data used to build the model shall be representative of the population of the institution's actual obligors or exposures. R Who checks data representativeness, data completeness and data accuracy? • Data Management Team • Validation Team Regulatory Framework for Credit Risk • CRR: Regulation Issue Compliance Check Evidence Documentation of Rating System Art. 175(2) of Reg. (EU) 575/2013 The institution shall document the rationale for and analysis supporting its choice of rating criteria. £ Where do you look for the documentation of the rationale for the methodological choices of the credit risk model? Art. 175(4)(b) of Reg. (EU) 575/2013 Where the institution employs statistical models in the rating process, the institution shall document their methodologies. This material shall establish a rigorous statistical process including out-of-time and out-of-sample performance tests for validating the model. £ The out-of-time (OOT) sample is a sample that includes years of observations additional to the portfolio that was used for model development. Why should model validation be performed on an OOT sample? Regulatory Framework for Credit Risk • CRR: Regulation Issue Compliance Check Evidence Documentation of Rating System Art. 175(2) of Reg. (EU) 575/2013 The institution shall document the rationale for and analysis supporting its choice of rating criteria. R Where do you look for the documentation of the rationale for the methodological choices of the credit risk model? Model Development Document Art. 175(4)(b) of Reg. (EU) 575/2013 Where the institution employs statistical models in the rating process, the institution shall document their methodologies. This material shall establish a rigorous statistical process including out-of-time and out-of-sample performance tests for validating the model. R The out-of-time (OOT) sample is a sample that includes years of observations additional to the portfolio that was used for model development. Why should model validation be performed on an OOT sample? To capture changes to the portfolio of obligors Regulatory Framework for Credit Risk • CRR: Regulation Issue Compliance Check Evidence Data Maintenance Art. 176(1)(a) of Reg. (EU) 575/2013 For exposures to corporates, where an institution uses the PD/LGD approach set out in Article 155(3), institutions shall collect and store: complete rating histories on obligors and recognised guarantors. £ Where do you store complete ratings? Rating System Database maintained by the Model Implementation Team. What are the common issues with storing complete rating histories? A common issue is that a database produces missing values for the rating and indicates that an entity is non-rated, where in fact the entity has a final rating. Regulatory Framework for Credit Risk • CRR: Regulation Issue Compliance Check Evidence Overall Requirements for Estimation Art. 179(1)(a) of Reg. (EU) 575/2013 In quantifying the risk parameters to be associated with rating grades or pools, institutions shall apply the following requirements: an institution's own estimates of the risk parameters PD, LGD, conversion factor and EL shall incorporate all relevant data, information and methods. The estimates shall be derived using both historical experience and empirical evidence, and not based purely on judgemental considerations. The estimates shall be plausible and intuitive and shall be based on the material drivers of the respective risk parameters.
The less data an institution has, the more conservative it shall be in its estimation. R The Validation Team should confirm that: • A rating is obtained via a statistical and quantitative part, and an expert, qualitative/judgmental part. • There is enough historical data containing default events to make meaningful conclusions. • The model is reactive to macro-conditions and stressed risk parameters. • The model discriminates well between good and bad obligors. Regulatory Framework for Credit Risk • CRR: Regulation Issue Compliance Check Evidence Validation Art. 185(a) of Reg. (EU) 575/2013 Institutions shall have robust systems in place to validate the accuracy and consistency of rating systems, processes, and the estimation of all relevant risk parameters. The internal validation process shall enable the institution to assess the performance of internal rating and risk estimation systems consistently and meaningfully. R A credit risk model is subject to a robust system of validation: • initial validation upon development; • backtesting validation on an annual basis; • annual review. Art. 185(b) of Reg. (EU) 575/2013 Institutions shall regularly compare realised default rates with estimated PDs for each grade and, where realised default rates are outside the expected range for that grade, institutions shall specifically analyse the reasons for the deviation. R The realised vs. predicted PD estimates are regularly compared and analysed through: • backtesting on an annual basis; • backtesting validation on an annual basis: • annual review. Regulatory Framework for Credit Risk Regulatory Framework for Credit Risk • TRIM: Regulation Issue Compliance Check Evidence Data Requirements Art. 7(a)(b)(c) of TRIM Institutions are expected to establish a complete framework which assesses the quality of the data considered for use in the modelling and risk quantification process including: (a) Its completeness and appropriateness; (b) The soundness of the process for vetting data inputs (especially with regard to missing data, outliers and categorical data); (c) The representativeness of modelling data. £ Who assesses data completeness and appropriateness? Who assesses the soundness of the process for vetting data inputs and the representativeness of the modelling data? Regulatory Framework for Credit Risk • TRIM: Regulation Issue Compliance Check Evidence Data Requirements Art. 7(a)(b)(c) of TRIM Institutions are expected to establish a complete framework which assesses the quality of the data considered for use in the modelling and risk quantification process including: (a) Its completeness and appropriateness; (b) The soundness of the process for vetting data inputs (especially with regard to missing data, outliers and categorical data); (c) The representativeness of modelling data. R Who assesses data completeness and appropriateness? • Data Management Team • Validation Team Who assesses the soundness of the process for vetting data inputs and the representativeness of the modelling data? • Validation Team Regulatory Framework for Credit Risk • TRIM: Regulation Issue Compliance Check Evidence Drivers for risk differentiation Art. 18 of TRIM Institutions should ensure that there are no overlaps in the range of application of different models and that each obligor or facility to which the IRB approach should be applied can clearly be assigned to one particular rating system. 
R CASE: Slotting Model Within the corporate exposure class, institutions shall separately identify as specialised lending exposures: • (a) the exposure is to an entity which was created specifically to finance or operate physical assets(SPV); • (b) the contractual arrangements give the lender a substantial degree of control over the assets and the income that they generate; • (c) the primary source of repayment of the obligation is the income generated by the assets being financed. Regulatory Framework for Credit Risk • TRIM on Internal Validation: • In the context of rating systems, the term “validation” encompasses a range of processes and activities that contribute to an assessment of whether ratings adequately differentiate risk, and whether estimates of risk components appropriately characterise the relevant aspects of risk. • The main role of the internal validation function is to ensure an adequate quality of the rating systems and their compliance with the relevant requirements. • The validation process and content are expected to be consistent across rating systems. However, it is not expected that institutions develop a unique validation process, as the relevant tests may differ from one rating system to another. • The institution should ensure that any statistical tests or confidence intervals used by the bank are appropriate from a methodological point of view (or sufficiently conservative). • Overrides should not only be monitored but also assessed as part of the validation process on an annual basis. Learning Outcome • Credit Risk Management How is credit risk managed within a bank? What are the credit risk mitigation techniques? • Three Lines of Defence Model What is the governance framework for credit risk management? • Regulatory Framework for Credit Risk Management What is the regulatory background for credit risk management? What is the difference between the Standardised, Foundation and Advanced approach to credit risk measurement? • Credit Risk Modelling What are the common models for credit risk: Probability of Default (PD); Loss Given Default (LGD); Exposure at Default (EAD); Expected Credit Loss (ECL)? • Credit Risk Model Validation What is the internal model validation function? What is a credit risk model lifecycle? How is model risk managed by a bank? • Backtesting of Credit Risk Models How are credit risk estimates backtested (with case studies)? Learning Outcome • Credit Risk Management How is credit risk managed within a bank? What are the credit risk mitigation techniques? • Three Lines of Defence Model What is the governance framework for credit risk management? • Regulatory Framework for Credit Risk Management What is the regulatory background for credit risk management? What is the difference between the Standardised, Foundation and Advanced approach to credit risk measurement? • Credit Risk Modelling What are the common models for credit risk: Probability of Default (PD); Loss Given Default (LGD); Exposure at Default (EAD); Expected Credit Loss (ECL)? • Credit Risk Model Validation What is the internal model validation function? What is a credit risk model lifecycle? How is model risk managed by a bank? • Backtesting of Credit Risk Models How are credit risk estimates backtested (with case studies)? 
Credit Risk Modelling • Definition of a Model: • “(…) the term model refers to a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.” – Federal Reserve System Input • Assumptions • Data • Scenarios • Expert Judgment Process • Methodology • Implementation • Calculation Engine • Aggregation Use • Quantitative Estimates • Forecasts • Limitation • Management decision support Credit Risk Modelling • Definition of a Model: Input • Assumptions • Data • Scenarios • Expert Judgment Process • Methodology • Implementation • Calculation Engine • Aggregation Output • Quantitative Estimates • Forecasts Credit Risk Modelling • Expected Credit Loss (ECL) • IFRS 9 introduces a forward-looking ECL model. Banks will now be required to consider historic, current and forward-looking information (including macro-economic data) in the credit risk modelling. Risk Parameter Basel Model IFRS 9 Model Measurement timespan • 12 months average PD. • 12 months PD; or • Remaining life of the underlying exposure. Look-back period • Long-run average-based PD • PIT PD Reflection of stressed conditions • Downturn LGD; • Reflection of a significant stress period • Current LGD • Forward-Looking PD/LGD • Reflection of economic conditions Recovery cost • Direct and indirect costs • Only direct costs EAD historical data • 5 years for retail exposures • 7 years for sovereign, corporate and bank exposures • No specific requirements Credit Risk Modelling • Expected Credit Loss (ECL) • Existing credit risk models can be used to feed into the ECL, however the IFRS 9 gaps should be eliminated: IFRS 9 Gap Description Gap Assessment Unbiased The ECL model must reflect an unbiased and probability-weighted amount that is determined by evaluating a range of possible outcomes. The models should provide the most accurate and unbiased estimates of default probability (PD models), loss rates (LGD models) and exposure amounts (EAD). • Backtesting reveals that the model produces estimates that are significantly above or below the predictions for the whole portfolio; • Model is designed with a built-in bias; • There is no alignment of model estimates with actual observations; • Model contains built-in floors and adjustments; • Model has specification bias. Credit Risk Modelling • IFRS 9 Gaps IFRS 9 Gap Description Gap Assessment PIT The ECL model must be a relative model to include the assessment of a significant increase in credit risk. • The model specifications do not reflect systemic credit risk; • The model specifications do not explain why the fundamental obligor factors are suitable to capture the absolute default risk; • The model is not subject to periodical empirical tests comparing predictions to actual results. Forward looking The ECL model should use the forward-looking information for the assessment purposes. Especially macroeconomic factors should be incorporated to the extent that these factors are material drivers of the model estimates. • The model relies only on the past information to determine the estimates; • The macroeconomic factors are not incorporated into the model. Credit Risk Modelling • IFRS 9 Gaps: • Model Documentation: • Backtesting Report • Model Development Report • Annual Review (previous year) IFRS 9 Gap Description Gap Assessment Term Structure The maximum period for the ECL is the maximum contractual period over which the bank is exposed to credit risk. 
• The model does not estimate a term structure over the life of a specific transaction; • The model estimate is limited to a 12-month horizon. Credit Risk Modelling • Basel Models: EAD is seen as an estimation of the extent to which a bank may be exposed to a counterparty in the event of, and at the time of, that counterparty's default. EAD LGD is the share of an asset that is lost when a borrower defaults: Normal Downturn LGD PD is the risk that the borrower will be unable or unwilling to repay its debt in full or on time: TTC PIT PD Learning Outcome • Credit Risk Management How is credit risk managed within a bank? What are the credit risk mitigation techniques? • Three Lines of Defence Model What is the governance framework for credit risk management? • Regulatory Framework for Credit Risk Management What is the regulatory background for credit risk management? What is the difference between the Standardised, Foundation and Advanced approach to credit risk measurement? • Credit Risk Modelling What are the common models for credit risk: Probability of Default (PD); Loss Given Default (LGD); Exposure at Default (EAD); Expected Credit Loss (ECL)? • Credit Risk Model Validation What is the internal model validation function? What is a credit risk model lifecycle? How is model risk managed by a bank? • Backtesting of Credit Risk Models How are credit risk estimates backtested (with case studies)? Credit Risk Model Validation • Model Validation is responsible for validating the following key aspects of models: • The relevance of the data used; • The model design: theoretical soundness, assumptions and developmental evidence; • The implementation; and • The model performance. • Analyses and tests that should be performed at least on an annual basis: Backtesting Portfolio Analysis Input Data Overrides Other Quant Tests Initialisation stage Strategy definition Methodology and Model Design Implementation Dissemination Monitoring Maintenance Credit Risk Model Validation Model Validation takes place at three separate stages within the model lifecycle framework, namely: • Methodology and Model Design; • Implementation; and • Monitoring stages. Model Overview Replication of the Model Review of methodological choices Robustness check Model Design Model Risk Regulatory Compliance Backtesting Model performance Model acceptance Monitoring Credit Risk Model Validation Model Risk: Model risk arises primarily from three sources: • On the input: The quality of model outputs depends on the quality of input data and assumptions, and errors in inputs or incorrect assumptions will lead to inaccurate outputs. • On the process: The model may have fundamental errors and may produce inaccurate outputs when viewed against the design objective and intended business uses. Errors can occur at any point from design through implementation. In addition, shortcuts, simplifications or approximations used to manage complicated problems could compromise the integrity and reliability of outputs from those calculations. • On the model usage: The model may be used incorrectly or inappropriately. We need to understand the limitations of a model to avoid using it in ways that are not consistent with the original intent. Credit Risk Model Validation Model Risk Management: Model Risk Rating (MRR) • The MRR process underpins the model risk management. It is an additional analysis to cover model-specific areas. In industry practice (e.g. RBS, HSBC), the MRR is assessed on the basis of model materiality and model quality.
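As a rough illustration of how such a two-dimensional assessment can be combined into an overall rating, the sketch below maps a materiality tier and a quality tier to an MRR. The tier labels and the combination matrix are purely hypothetical assumptions for teaching purposes, not a prescribed methodology.

```python
# Illustrative sketch only: the tiers and the combination matrix below are hypothetical
# assumptions, not an actual bank MRR methodology.

MATERIALITY_TIERS = ["low", "medium", "high"]   # e.g. driven by EAD/RWA covered by the model
QUALITY_TIERS = ["strong", "adequate", "weak"]  # e.g. driven by data, specification, performance

# Hypothetical combination matrix: rows = materiality, columns = quality (MRR-1 = highest model risk).
MRR_MATRIX = {
    "low":    {"strong": "MRR-4", "adequate": "MRR-3", "weak": "MRR-3"},
    "medium": {"strong": "MRR-3", "adequate": "MRR-2", "weak": "MRR-2"},
    "high":   {"strong": "MRR-2", "adequate": "MRR-2", "weak": "MRR-1"},
}

def model_risk_rating(materiality: str, quality: str) -> str:
    """Combine a materiality tier and a quality tier into an overall MRR."""
    if materiality not in MATERIALITY_TIERS or quality not in QUALITY_TIERS:
        raise ValueError("unknown tier")
    return MRR_MATRIX[materiality][quality]

# Example: a high-materiality model of weak quality receives the highest risk rating.
print(model_risk_rating("high", "weak"))  # MRR-1
```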
• For Validation: • The Validation Team must ensure that every model has the MRR assigned; • The Validation Team must use the expert knowledge to propose a new MRR at each annual review; • The MRR must be updated on an annual basis in a way that ensures the audit trail of previous MRRs. • Notwithstanding the model risk assessment during the annual review, the MRR can be subject to a change if new relevant factors come to light that have not been covered during the annual review. • The Validation Team must present the MRR during the Model Risk Committee (MRC). Areas of high risk materiality must be investigated and commented during MRC. Credit Risk Model Validation MRR Dimension Description Model Materiality Assessment of the size of the portfolio to which the model is used in terms of EAD and RWA. Assessment of the impact of the portfolio size on the bank’s capital and the influence on other models. Data Assessment if the risk data is collected to a sufficient standard in terms of accuracy and completeness to support current and future model lifecycle events. Investigation of missing and erroneous data items (e.g. decimals, units). Model Specification Assessment if the model is correctly specified (e.g. specified in a way that creates potential risk by basing the model only on judgemental factors or having a Point-in-Time PD model that does not respond to the credit cycle). Model Implementation Assessment if the model itself or a complex system set-up causes the implementation difficulties (e.g. a model is coded into more than one system causing various outputs depending on the system used or a model is overly too complex that prevents the implementation). Model Use Assessment if the model is used correctly. Analysis of the number of overrides (e.g. a high number of overrides caused by the lack of an on-going model training for the new staff). Assessment if the model outputs are misused or extended to other processes beyond the primary approved purpose of the model. Model Performance Assessment if the model is working as expected (e.g. identification of any breaches of the agreed model performance metrics and thresholds, failed backtesting). Learning Outcome • Credit Risk Management How is credit risk managed within a bank? What are the credit risk mitigation techniques? • Three Lines of Defence Model What is the governance framework for credit risk management? • Regulatory Framework for Credit Risk Management What is the regulatory background for credit risk management? What is the difference between the Standardised, Foundation and Advanced approach to credit risk measurement? • Credit Risk Modelling What are the common models for credit risk: Probability of Default (PD); Loss Given Default (LGD); Exposure at Default (EAD); Expected Credit Loss (ECL)? • Credit Risk Model Validation What is the internal model validation function? What is a credit risk model lifecycle? How is model risk managed by a bank? • Backtesting of Credit Risk Models How are credit risk estimates backtested (with case studies)? Learning Outcome • Credit Risk Management How is credit risk managed within a bank? What are the credit risk mitigation techniques? • Three Lines of Defence Model What is the governance framework for credit risk management? • Regulatory Framework for Credit Risk Management What is the regulatory background for credit risk management? What is the difference between the Standardised, Foundation and Advanced approach to credit risk measurement? 
• Credit Risk Modelling What are the common models for credit risk: Probability of Default (PD); Loss Given Default (LGD); Exposure at Default (EAD); Expected Credit Loss (ECL)? • Credit Risk Model Validation What is the internal model validation function? What is a credit risk model lifecycle? How is model risk managed by a bank? • Backtesting of Credit Risk Models How are credit risk estimates backtested (with case studies)? Backtesting of Credit Risk Models • Backtesting is a model validation process that compares the internal model’s estimates with the actual realised observations. • The purpose of initiating a backtesting exercise is to evaluate the predictive power and performance of a model and how the model performs over time. Backtesting serves to flag model deterioration, when a backtested model starts to underestimate or overestimate risk. • The backtesting process assesses the following characteristics of the internal models: calibration, discrimination and stability. Backtesting Process Validation Focus Calibration Assessment of the deviation of the internal model’s estimates from the realised observations. Discrimination Assessment of the extent to which bad realised observations are assigned low internal model’s estimates and the good realised observations are assigned high estimates. Stability Assessment of the changes to the population over time that affect the appropriateness of the internal model. Backtesting of Credit Risk Models • It should be ensured that the validation team has its own access to the relevant databases that are used to generate the backtesting dataset. • The validation should focus on the following aspects of the backtesting dataset: Portfolio Aggregation Validation Focus Aggregation over initialisation points, not overlapping, fixed time horizons Validation should consider the following points: • Does the observation window increase relevant to the increase in the time horizon? • Is the observation window large enough to guarantee statistically significant results? • Does the observation window include a diversified range of market conditions? • Does the observation window allow for periods of sound model performance in normal macro-conditions to mask bad performance during the period of stress? • Can the resulting data be considered independent with appropriate statistical tests applied? Backtesting of Credit Risk Models Portfolio Aggregation Validation Focus Aggregation over initialisation points, overlapping, fixed time horizons Validation should consider the following points: • Does the observation window increase relevant to the increase in the time horizon? • Does the observation window enhance the ability to discriminate between the good vs. bad model performance due to the overlaps? Aggregation over varying time horizons, fixed initialisation points Validation should consider the following points: • Does the observation window capture model’s performance changes resulting from later recalibrations and reparametrisation of the model? • Does the aggregation lead to the robust assessment of model performance? • Does the observation window enhance the ability to discriminate between the good vs. bad model performance at different time horizons? 
Backtesting of Credit Risk Models Portfolio Aggregation Validation Focus Aggregation over varying time horizons, varying initialisation points Validation should consider the following points: • Does taking forecasts with different time horizons initialised on a number of dates affect the clarity of results? • Does the aggregation lead to a robust assessment of model performance? • Does the observation window enhance the ability to discriminate between good and bad model performance at different time horizons? Aggregation over credit facilities, risk factors, obligors Validation should consider the following points: • Does the aggregation lead to an increase in default instances or other exceptions for a given observation window? • Does the aggregation capture the way in which dependencies between risk factors, obligors or credit facilities are modelled? Backtesting of Credit Risk Models • Calibration – Empirical Example: Brier Score The Brier score is also known as the Mean Square Error (MSE). The Brier score is a common way of verifying the accuracy of a probability forecast that relates to a specific event with a known estimated probability. The Brier score can only be used for binary outcomes, where there are only two possible realisations: default/non-default. For N obligors with individual PD estimates \(PD_i\) and with \(\theta_i\) being a default indicator that takes the value of 1 for default and the value of 0 for non-default, the Brier score takes the following formula: \( \mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N} (\theta_i - PD_i)^2 \) [1] The Brier score is small when high PD estimates are assigned to defaults and low PD estimates are assigned to non-defaults. A low Brier score indicates good calibration of a rating model. The Brier score can only tell how accurate the forecast was; it cannot say how accurate the forecast is compared with anything else. The validation team should consider the following aspect when validating calibration using the Brier score: • Is the obtained score low enough to conclude that the model predicts defaults accurately? Backtesting of Credit Risk Models • Calibration – Empirical Example: Brier Score Specifically for the TRIM exercise, the validation team is required to have pre-defined actions in place that correspond to certain values of the Brier score. Traffic Lights for Brier Score Traffic Light Values Description Dark Green 0.0 ≤ Brier score ≤ 0.1 Indication that the model is very well calibrated (conservatively calibrated). Green 0.1 < Brier score ≤ 0.5 Indication that the model is calibrated with no significant divergence between the estimates and realisations. Yellow 0.5 < Brier score ≤ 0.7 Indication that the observed realisations are in line with the model estimates. There is no significant difference, but the model can be recalibrated in order to increase its accuracy. Orange 0.7 < Brier score ≤ 0.9 Indication that the model is weakly calibrated and significant differences appear between the observed realisations and model estimates. Red 0.9 < Brier score ≤ 1.0 The model is wholly inaccurate and recalibration is needed.
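A minimal sketch of equation [1] is shown below, using made-up PD estimates and default flags; the resulting score would then be read against the traffic-light thresholds above.

```python
import numpy as np

# Minimal sketch of equation [1]: Brier score / MSE for a PD model.
# The PD estimates and default flags below are made-up illustrative values.
pd_estimates = np.array([0.02, 0.05, 0.10, 0.20, 0.01])  # model PDs per obligor
defaults     = np.array([0,    0,    1,    1,    0])     # 1 = default, 0 = non-default

brier = np.mean((defaults - pd_estimates) ** 2)
print(f"Brier score (MSE): {brier:.6f}")
# A low value indicates good calibration; compare it against the pre-defined
# traffic-light thresholds in the table above.
```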
Backtesting of Credit Risk Models • Calibration – Empirical Example: Brier Score For the backtesting results of both PD models, the null hypothesis is tested by using Spiegelhalter's z-statistic, which allows for a formal assessment of the calibration of the targeted PD models: \( Z(\theta, PD) = \frac{\sum_{i=1}^{N} (\theta_i - PD_i)(1 - 2PD_i)}{\sqrt{\sum_{i=1}^{N} (1 - 2PD_i)^2 \, PD_i (1 - PD_i)}} \) [2] The null hypothesis of perfect calibration is rejected at the significance level α if the following applies: \( \lvert Z(\theta, PD) \rvert > \Phi^{-1}(1 - \alpha/2) \) [3] 2008-2015 Backtesting Dataset (Appendix): Population N = 4153; PD Model: MidCorp; Brier Score B(θ, PD) = 0.034532; Spiegelhalter's z-statistic Z(θ, PD) = 7.524517; P-value = 0.000199 < α; CI: α = 0.05 (95%). Backtesting of Credit Risk Models • Discrimination – Empirical Example: Gamma Coefficient & Yule's Q In practice, the discriminatory power of the mid-corporate PD models is assessed by means of the AUROC, CAP and AR. As a rival statistical tool to the aforementioned methods, Goodman and Kruskal's gamma is used to validate the discriminatory power of a PD model. Goodman and Kruskal's gamma is applied to assess the consistency of allocating good ratings to less risky obligors and bad ratings to highly risky obligors. The null hypothesis states that the validated PD model discriminates between highly risky and less risky obligors, with the gamma coefficient being applied to test this hypothesis: the z-value derived from the gamma coefficient is compared with the critical z-value from the tables, and if the z-value is lower than the critical value, one cannot reject the null hypothesis. The gamma coefficient is specified by the following formula: \( \gamma = \frac{N_c - N_d}{N_c + N_d} \) [4] where \(N_c\) is the number of concordant pairs and \(N_d\) is the number of discordant pairs. Backtesting of Credit Risk Models • Discrimination – Empirical Example: Gamma Coefficient & Yule's Q \( \gamma = \frac{N_c - N_d}{N_c + N_d} \) Under the TRIM framework, the results of the gamma coefficient should be interpreted in accordance with the predefined levels included in relevant internal policies: Traffic Light Values Description Dark Green 0.8 < γ ≤ 1.0 Very strong discrimination between the good and bad obligors Green 0.6 < γ ≤ 0.8 Strong discrimination between the good and bad obligors Yellow 0.4 < γ ≤ 0.6 Neutral discrimination between the good and bad obligors Orange 0.1 < γ ≤ 0.4 Weak discrimination between the good and bad obligors Red γ ≤ 0.1 Very weak discrimination between the good and bad obligors Backtesting of Credit Risk Models • Discrimination – Empirical Example: Gamma Coefficient & Yule's Q The gamma coefficient's z-value is compared with the critical z-values from the tables and, if lower, one cannot reject the null hypothesis that there is a difference in the populations (the rating discriminates between highly risky and less risky obligors): \( z = \gamma \sqrt{\frac{N_c + N_d}{n(1 - \gamma^2)}} \) [5] where n is the total number of obligors in the backtesting sample. First Step: 1) Divide the backtesting dataset between the good and bad obligors. How? Backtesting of Credit Risk Models • Discrimination – Empirical Example: Gamma Coefficient & Yule's Q 1) Divide the backtesting dataset between the good and bad obligors. How? • You need a cut-off point! • The distinction between the good and bad obligors is usually set at the cut-off point for non-investment grade bonds, i.e. ratings below BBB-. In doing so, the obligors with adequate capacity to meet their financial commitments are separated from the obligors that are vulnerable in the near term and facing major on-going uncertainties rendering them unable to meet their financial commitments.
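The sketch below reproduces this calculation for a 2x2 good/bad split, using the MidCorp counts shown in the table that follows: it derives the concordant and discordant pairs, the gamma coefficient (equation [4]), its z-value (equation [5]) and Yule's Q. It is an illustration of the formulas only, not the bank's production backtesting code.

```python
import math

# Sketch of equations [4]-[5] and Yule's Q for a 2x2 good/bad vs default/non-default table.
# Counts are taken from the MidCorp PD model table shown on the next slide.
good_nondefault, good_default = 1950, 19
bad_nondefault,  bad_default  = 1485, 78

n_c = good_nondefault * bad_default   # concordant pairs: good & non-default paired with bad & default
n_d = good_default * bad_nondefault   # discordant pairs: good & default paired with bad & non-default

gamma = (n_c - n_d) / (n_c + n_d)                                   # eq. [4]
n = good_nondefault + good_default + bad_nondefault + bad_default   # total obligors in the sample
z = gamma * math.sqrt((n_c + n_d) / (n * (1 - gamma ** 2)))         # eq. [5]

# For a 2x2 table, Yule's Q coincides with the gamma coefficient.
yules_q = (good_nondefault * bad_default - good_default * bad_nondefault) / \
          (good_nondefault * bad_default + good_default * bad_nondefault)

print(f"N_c = {n_c}, N_d = {n_d}")            # 152100, 28215
print(f"gamma = {gamma:.4f}, z = {z:.3f}")    # approx. 0.687 and 6.756, as reported below
print(f"Yule's Q = {yules_q:.4f}")            # approx. 0.687
```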
MidCorp PD model: Rating Non-Defaults Defaults Good (AAA to BBB-) 1950 19 Bad (BB+ to CCC) 1485 78 Backtesting of Credit Risk Models • Discrimination – Empirical Example: Gamma Coefficient & Yule's Q The analysis of the above table reveals the following facts, placing the gamma coefficient in the green flag of the predefined traffic lights: • The number of concordant pairs N_c is: 152100 • The number of discordant pairs N_d is: 28215 • The Gamma Coefficient γ is: 0.687047 MidCorp PD model: Rating Non-Defaults Defaults Good (AAA to BBB-) 1950 19 Bad (BB+ to CCC) 1485 78 Backtesting of Credit Risk Models • Discrimination – Empirical Example: Gamma Coefficient & Yule's Q 1) The Gamma Coefficient γ is: 0.687047 but… • The z-value (6.756) is significantly above the critical z-values from the tables. Thus, there are no grounds to assume that the model discriminates between highly risky and less risky obligors, as the null hypothesis can be rejected. • What do you do then? Clearly the model seems to be OK for the discriminatory power. Yule's Q • Yule's Q is a measure of the discriminatory power and of the association between the bad/good differentiation and defaults. • Yule's Q is always a number between -1 and 1 and constitutes the statistical measure of discrimination for a 2x2 confusion matrix. • Unlike the gamma coefficient, Yule's Q does not require tests of significance, and hence the TRIM inspection does not insist on providing proof of robustness for the Yule's Q findings. Backtesting of Credit Risk Models • Discrimination – Empirical Example: Gamma Coefficient & Yule's Q However, Yule's Q should be referenced to predefined standards (e.g. traffic lights): Traffic Light Values Description Dark Green Yule's Q = 1 Perfect association between the good/bad ratings of the obligors and defaults. The model has very strong discrimination. Green 0.7 ≤ Yule's Q < 1 Very strong association between the good/bad ratings of the obligors and defaults. The model has strong discrimination. Yellow 0.5 ≤ Yule's Q < 0.7 Substantial association between the good/bad ratings of the obligors and defaults. The model has good discrimination. Orange 0.3 ≤ Yule's Q < 0.5 Moderate association between the good/bad ratings of the obligors and defaults. The model has weak discrimination. Red Yule's Q ≤ 0.3 Negligible or inversed association between the good/bad ratings of the obligors and defaults. The model has very weak discrimination. Backtesting of Credit Risk Models • Discrimination – Empirical Example: Gamma Coefficient & Yule's Q 1) The Gamma Coefficient γ is: 0.687047 but… 2) Yule's Q is 0.687047, indicating a very strong association between the good/bad ratings and defaults (green flag of the predefined traffic lights). Backtesting of Credit Risk Models • Stability – Empirical Example: Population Stability Index (PSI) In practice, the stability of the PD models is assessed by means of the PD rating migration matrix, analysis of the changing macroeconomic conditions and the changing business landscape. The validation team is advised to apply measures of population stability in order to investigate how much the underlying portfolio of obligors has changed over time, rendering a credit risk model obsolete and inadequate: \( \mathrm{PSI} = \sum_{i} \left( P_{i,\mathrm{actual}} - P_{i,\mathrm{expected}} \right) \ln\!\left( \frac{P_{i,\mathrm{actual}}}{P_{i,\mathrm{expected}}} \right) \) [6]
where "i" denotes the rating class (e.g. AAA), \(P_{i,\mathrm{actual}}\) is the actual distribution of obligors in rating class "i" and \(P_{i,\mathrm{expected}}\) is the expected distribution of obligors in rating class "i", taken from the reference year, i.e. the previous year and/or the development year. Backtesting of Credit Risk Models • Stability – Empirical Example: Population Stability Index (PSI) Under the TRIM framework, the results of the PSI should be referenced to the predefined levels that allow for actionable decisions to be taken: Traffic Light Values Description Dark Green 0.00 ≤ PSI < 0.05 Population remains very stable over time. Green 0.05 ≤ PSI < 0.10 Negligible shift in the population over time. Population remains stable. Yellow 0.10 ≤ PSI < 0.25 Minor shift in the population over time. Population remains relatively stable. Orange 0.25 ≤ PSI < 0.50 Moderate shift in the population over time. Population remains unstable. Red 0.50 ≤ PSI Significant shift in the population over time. Population remains unstable. Backtesting of Credit Risk Models • Stability – Empirical Example: Population Stability Index (PSI)
Rating Class 2008 2009 2010 2011 2012 2013 2014 2015
AAA 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00%
AA+ 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00%
AA 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00%
AA- 0.00% 0.20% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00%
A+ 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00%
A 0.00% 0.00% 0.20% 0.40% 0.40% 0.00% 0.00% 0.00%
A- 0.00% 0.00% 0.00% 0.00% 0.00% 0.20% 0.20% 0.20%
BBB+ 0.40% 1.00% 1.00% 0.60% 1.50% 2.40% 1.30% 1.40%
BBB 15.10% 12.30% 9.40% 10.60% 12.70% 13.80% 15.20% 15.00%
BBB- 14.70% 15.80% 14.00% 16.70% 13.50% 14.60% 12.20% 13.50%
BB+ 23.20% 19.20% 19.80% 15.50% 14.20% 19.60% 16.30% 21.10%
BB 15.90% 15.20% 15.40% 16.50% 16.30% 14.60% 15.90% 15.30%
BB- 13.20% 13.20% 11.20% 11.80% 11.60% 12.00% 15.20% 11.20%
B+ 4.10% 2.20% 4.20% 4.50% 4.40% 7.30% 3.60% 5.40%
B 1.60% 2.20% 4.20% 2.60% 3.20% 2.20% 1.60% 1.80%
B- 0.60% 1.20% 1.20% 0.60% 2.50% 2.20% 2.00% 2.00%
CCC 0.40% 0.20% 0.00% 0.80% 0.80% 1.70% 1.40% 1.80%
NR 0.20% 0.20% 0.00% 0.00% 0.40% 0.40% 0.00% 0.40%
PSI 6.94% 4.30% 3.19% 4.87% 12.07% 8.85% 4.84%
Backtesting of Credit Risk Models • Stability – Rating Migration Matrix [Matrix: obligor migration counts between the rating classes AAA, AA+, AA, AA-, A+, A, A-, BBB+, BBB, BBB-, BB+, BB, BB-, B+, B, B-, CCC and NR.] Learning Outcome Summary • Credit Risk Management How is credit risk managed within a bank? What are the credit risk mitigation techniques? • Three Lines of Defence Model What is the governance framework for credit risk management? • Regulatory Framework for Credit Risk Management What is the regulatory background for credit risk management? What is the difference between the Standardised, Foundation and Advanced approach to credit risk measurement? • Credit Risk Modelling What are the common models for credit risk: Probability of Default (PD); Loss Given Default (LGD); Exposure at Default (EAD); Expected Credit Loss (ECL)?
• Credit Risk Model Validation What is the internal model validation function? What is a credit risk model lifecycle? How is model risk managed by a bank? • Backtesting of Credit Risk Models How are credit risk estimates backtested?