Open letters: Laying bare linguistic patterns in PRA messages using machine learning

David Bholat and James Brookes

In a recent research paper, we show that the way supervisors write to banks and building societies (hereafter ‘banks’) has changed since the financial crisis. Supervisors now adopt a more directive, forward-looking, complex and formal style than they did before the crisis. We also show that their language and linguistic style are related to the nature of the bank: for instance, banks closest to failure receive letters containing a high proportion of risk-related language. In this blog, we discuss the linguistic features that most sharply distinguish different types of letters, and the machine learning algorithm we used to arrive at our conclusions.

Placing supervisory correspondence in the context of the PRA’s supervisory approach

The most important, regular written communication from the PRA to banks it supervises is a letter sent after annual Periodic Summary Meetings (PSMs). At a PSM, PRA staff responsible for the day-to-day supervision of a bank present to a senior management panel, giving their view on the most material risks which that bank poses to the PRA’s objectives. After the PSM, a letter is drafted to the bank’s board conveying these key risks, and setting out actions which the PRA expects the bank to take to mitigate them.

Figure 1 shows the material risks the PRA highlighted to banks in PSM letters sent during 2015, as gauged by section headings. These run from capital adequacy issues to key person risk and bank-specific topics.

Figure 1: PSM section headings in 2015

Bank characteristics discussed in the Periodic Summary Meeting 

Bank Category

As part of the PSM, supervisors propose, and the senior panel decides, a score summing up a bank’s ‘Category’. A bank’s categorisation essentially reflects its size and complexity. Category 1 banks are the most significant deposit-takers, capable of very significant disruption to the financial system should they fail. At the other end of the spectrum, Category 5 banks are deposit-takers which, were they to fail, would cause almost no disruption to the wider system.

Figure 2: Category score

A bank’s category matters because it determines how intensively it is supervised. Category 1 banks have more supervisory resources dedicated to them than others. Thus, in our analysis, we compared letters sent to Category 1 banks with those sent to Category 2-4 banks. Category 5 banks were excluded from the analysis because these are largely credit unions subject to a different regulatory regime.

Bank PIF stage

Another part of the PSM meeting involves supervisors and the panel determining a bank’s proximity to failure. This is called a bank’s Proactive Intervention Framework (PIF) stage. PIF stages run 1 to 5, with 1 signifying low risks to the viability of the bank, and 5 a bank that is in resolution.

Figure 3: Proactive Intervention Framework stage (PIF score)

Like Category, a bank’s PIF stage influences how intensively it is supervised by the PRA. All other things being equal, a bank at PIF stage 4, where there is imminent risk to the bank’s viability, will obviously concern the PRA more than a bank at PIF stage 1, where supervisors have judged there is low risk to the viability of the bank. In our analysis, we compared banks at stages 1 and 2 with those at stages 3 and 4. Banks at PIF stage 5 were not included in the analysis.

Machine learning letters

In order to see if letters written to different category and PIF stage banks are linguistically different, we used a supervised machine learning technique known as a random forest classifier. This type of classifier is a collection of decision trees whose predictions are averaged.
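As a rough sketch of this setup (not our actual code; the data below are synthetic stand-ins for the letters), scikit-learn's RandomForestClassifier grows such a collection of trees, with each tree trained on a random sample of observations and each split drawn from a random subset of features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in data: 80 "letters", each with 25 linguistic features.
# Label 1 = letter sent to a Category 1 bank, 0 = Category 2-4.
X = rng.normal(size=(80, 25))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # label driven by features 0 and 1

forest = RandomForestClassifier(
    n_estimators=2000,   # grow 2,000 trees, as in the post
    max_features=5,      # each split considers 5 of the 25 features
    bootstrap=True,      # each tree trains on a random sample of letters
    random_state=0,
).fit(X, y)

# Averaged impurity reductions rank the features by how well they discriminate.
top = np.argsort(forest.feature_importances_)[::-1][:3]
print("most discriminating feature indices:", top)
```

On this synthetic data, the importance ranking recovers the two features that actually drive the labels, which is the sense in which the forest "identifies the most important linguistic features".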

Here’s how it works.

First, for each decision tree we grow, a different random sample is drawn from the full set of letters.

The algorithm then identifies the most important linguistic features that distinguish letters from one another. We looked at 25 linguistic features, including measures of linguistic complexity, sentiment, directiveness, formality and forward-lookingness (Figure 4). Each tree considers only a random sample of the features from the full set of 25.

Figure 4: Linguistic features


Sentiment
  • financial sentiment score
  • proportion of high-risk diction

Complexity
  • length of letter
  • number of section headings in letter
  • presence of an appendix
  • proportion of acronyms in letter
  • proportion of numerals in letter
  • mean sentence length
  • mean rate of punctuation per sentence
  • mean rate of subordination per sentence
  • mean rate of verbs per sentence

Directiveness
  • proportion of obligative structures in letter
  • proportion of deadlines in a letter
  • proportion of ‘please’ in a letter
  • ratio of sentence-initial ‘please’ count to sentence-medial ‘please’ count
  • ratio of sentence-initial ‘you’ count to sentence-medial ‘you’ count

Forward-lookingness
  • proportion of non-past tensed verbs
  • proportion of future-oriented sentences

Formality
  • proportion of local person pronouns in letter
  • ratio of ‘I’ count to ‘PRA’ count
  • ratio of ‘I’ count to ‘we’ count
  • ratio of ‘we’ count to ‘PRA’ count
  • ratio of ‘you’ count to bank count
  • whether the salutation is handwritten or not
  • whether the salutation is to a named individual or not

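To give a flavour of how measures like these can be read off the raw text (a simplified illustration only; the definitions in our paper differ in detail), here is a sketch that computes a handful of them:

```python
import re

def letter_features(text: str) -> dict:
    """A few illustrative stylistic features computed from a letter's raw text."""
    # Crude sentence split on terminal punctuation; empty strings filtered out.
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    words = re.findall(r"[A-Za-z']+", text)
    n = max(len(words), 1)  # avoid division by zero on empty input
    return {
        "letter_length": len(words),
        "mean_sentence_length": len(words) / max(len(sentences), 1),
        "prop_please": sum(w.lower() == "please" for w in words) / n,
        "prop_numerals": len(re.findall(r"\b\d+\b", text)) / n,
    }

example = "Please provide the remediation plan by 30 June. We expect the board to respond."
print(letter_features(example))
```

Each letter then becomes a fixed-length vector of such numbers, which is what the classifier consumes.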

To make things concrete, consider an example (Figure 5). The decision tree algorithm randomly samples 5 features from the full set of 25 at its first branch (the ones in navy blue). The algorithm then identifies which of these features best distinguish between letters written to Category 1 banks and those written to Category 2-4 banks. In the hypothetical tree below, the type of salutation (handwritten versus typed) is the most important feature. At the next split, another random selection of 5 features is sampled (the ones in aqua). The algorithm identifies that, for letters with typed salutations, letter length is important, with longer letters (>1500 words) being sent to Category 1 banks. This process continues until the algorithm no longer finds any relevant splits.

Figure 5: Example decision tree
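A single tree like the hypothetical one above can be grown with scikit-learn's DecisionTreeClassifier. The toy data below simply encode the salutation/length pattern from the illustrative example, not an actual result:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy features per letter: [salutation handwritten? (1/0), letter length in words]
X = np.array([
    [1,  900], [1, 1100], [0, 1800], [0, 2000],   # letters to Category 1 banks
    [0,  900], [0, 1200], [0, 1400], [0,  700],   # letters to Category 2-4 banks
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = Category 1

tree = DecisionTreeClassifier(random_state=0).fit(X, y)

# A typed-salutation letter of 1,900 words lands in the Category 1 leaf;
# a short typed letter does not; a handwritten salutation does.
print(tree.predict([[0, 1900], [0, 800], [1, 1000]]))
```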

After learning to identify the discriminating linguistic features from its sample, the tree is used to classify the remaining letters that were not in the sample. This out-of-sample test helps assess the model’s accuracy. For each remaining letter, the algorithm makes a prediction: in this example, whether the letter was sent to a Category 1 bank or not. If, say, one of these out-of-sample letters had a typed salutation and a length greater than 1500 words, it would be predicted to be a Category 1 bank letter (Figure 6).

Figure 6: Predicting the out-of-sample letters

We grew 2,000 trees this way. The result is a ‘random forest’ of decision trees.

For any given letter, the predictions from the trees for which that letter was out-of-sample are combined to produce a majority prediction. The accuracy of the random forest is then the proportion of letters for which the majority prediction matches reality (Figure 7).

Figure 7: Random forest model and predictions
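This out-of-sample voting is what scikit-learn exposes as the "out-of-bag" decision function: for each observation, it averages the votes of only those trees that did not train on it, and the majority class is the forest's prediction. A sketch on synthetic data (again, not our actual letters):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 25))        # 60 synthetic letters, 25 features
y = (X[:, 0] > 0).astype(int)        # label driven by a single feature

forest = RandomForestClassifier(n_estimators=500, max_features=5,
                                oob_score=True, random_state=1).fit(X, y)

# For each letter, average the votes of only those trees that did not
# see it during training, then take the majority class.
oob_majority = forest.oob_decision_function_.argmax(axis=1)
accuracy = (oob_majority == y).mean()
print(f"out-of-bag accuracy: {accuracy:.2f}")  # same number as forest.oob_score_
```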

Because of the randomness of features and letters in any given forest, we repeated the process 100 times and averaged the results. Our model accuracy was typically 90%.
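Continuing the synthetic sketch above, repeating the whole procedure just means re-growing the forest under different random seeds and averaging the accuracies (we used 100 repeats; the sketch uses 10 to keep it quick):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 25))        # synthetic stand-in letters
y = (X[:, 0] > 0).astype(int)

# Re-grow the whole forest under different seeds and average the
# out-of-bag accuracies.
scores = [
    RandomForestClassifier(n_estimators=200, max_features=5, oob_score=True,
                           random_state=seed).fit(X, y).oob_score_
    for seed in range(10)
]
print(f"mean accuracy over repeats: {np.mean(scores):.2f}")
```

Averaging over repeats smooths out the run-to-run variation introduced by the random sampling of letters and features.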


Our random forest model tells us that Category 1 banks receive letters that are linguistically different from those received by banks in other categories. Among other findings, letters to these banks are typically longer than those written to lower-category banks, and they tend to be more personalised: they are often addressed to a named individual (‘Dear Katy’), and these salutations are often handwritten. Our research also shows that a bank’s PIF stage shapes the way PSM letters are written.

Perhaps most interestingly, we find that the style and substance of supervisory communications have changed post-crisis. Here we compared PSM letters from 2014 and 2015 with letters written by the Financial Services Authority before 2007. PRA PSM letters were more directive, forward-looking, complex and formal than the FSA letters we examined. Shifting from style to substance, we found that PRA letters contained a much higher percentage of bespoke section headings (Figure 8).

Figure 8: Distribution of section heading types for FSA and 2015 PSM letters


Good communication is an important part of banking supervision. It means that the material risks posed by a bank are clearly articulated by supervisors, along with the actions supervisors expect the bank to take to mitigate them. Our research suggests there has been a step change in how supervisors communicate with banks since the financial crisis: supervisors now adopt a more directive, forward-looking and formal approach. Consistent with this, a survey taken during 2016/17 found that 91% of firms agreed they are clear about what the PRA expects them to do to address key risks.

David Bholat and James Brookes work in the Bank’s Advanced Analytics area of the Research & Statistics Division.

If you want to get in touch, please email us or leave a comment below.


Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.


