What could be the best data architecture for a loan loss provisioning reporting system?
Keeping the data unbiased is the first principle. In the loan loss reporting world, this has three implications: first, source a rich set of attributes without restricting oneself to the current scope of reporting; second, acquire granular data, going down to transaction level where data volumes permit; and third, store historical data. This future-proofs loan loss reporting, at least to some extent, and equips the bank to meet business users' demands for more insights from the data, furnish drilldowns, and so on.
Particularly important in the loan loss reporting scenario is enabling users to interact with the data effortlessly. This means allowing them to perform data corrections, enrichments, and validations as self-sufficiently as possible. To support this, data needs to be presented in banker-friendly language, and most, if not all, data management and transformation functionality should be UI-driven.
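One way a UI-driven correction capability can stay auditable is to record each user correction as a separate, immutable entry layered on top of the raw sourced data, rather than mutating the source. The names below are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch: a correction captured from the UI records who
# changed what, when, and why, so the audit trail is built in.
@dataclass(frozen=True)
class Correction:
    account_id: str
    field_name: str
    old_value: object
    new_value: object
    corrected_by: str
    corrected_at: datetime
    reason: str

def apply_corrections(record: dict, corrections: list[Correction]) -> dict:
    # Produce a corrected *view* of the record; the raw record is left
    # untouched so the original sourced value can always be inspected.
    view = dict(record)
    for c in corrections:
        if c.account_id == record["account_id"]:
            view[c.field_name] = c.new_value
    return view
```

Because the raw record and the correction log are both retained, reports can be rerun against either the sourced or the corrected view, which is useful during regulatory inspections.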
Flexibility needs to be provided through parameterization at various points, from identification of non-performing behaviour in an asset account to the definition of account pools and the specification of provisioning levels. To achieve this, the final processed metrics need to be decoupled from the data. Rules can then be changed and managed directly by users to keep pace with business and regulatory considerations without impacting the underlying data layer. Lastly, traceability and auditability need to be maintained as data flows through the various stages of transformation, user validation, and correction.
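The decoupling described above can be sketched as classification thresholds and provisioning rates held as user-maintainable parameters, with the computation reading them rather than hard-coding them. The thresholds and rates here are placeholder figures for illustration only; actual values would come from the applicable regulatory framework.

```python
# Hypothetical sketch: thresholds and rates live in a parameter layer
# that users can maintain, decoupled from the underlying data layer.
NPA_THRESHOLD_DAYS = 90          # parameter: days past due before an account is non-performing
PROVISION_RATES = {              # parameter: provisioning level per account pool
    "standard": 0.004,
    "substandard": 0.15,
    "doubtful": 0.40,
    "loss": 1.00,
}

def classify(days_past_due: int) -> str:
    # Pooling rules driven by parameters, not embedded in reporting code.
    if days_past_due < NPA_THRESHOLD_DAYS:
        return "standard"
    if days_past_due < 365:
        return "substandard"
    if days_past_due < 3 * 365:
        return "doubtful"
    return "loss"

def provision(balance: float, days_past_due: int) -> float:
    # Final metric computed from parameters at run time, so a rate change
    # takes effect without touching stored data or reprocessing feeds.
    return balance * PROVISION_RATES[classify(days_past_due)]
```

When a regulator revises a provisioning level, only the parameter table changes; the source data and the transformation pipeline are untouched, which also simplifies the audit trail for the change.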
Hence the bank needs to choose an approach that not only allows reporting to be automated but also provides the added capabilities required to manage regulatory inspections seamlessly while keeping pace with changing regulations.