Oliver Fox
I work for Artificial Labs, an insurtech company based in the UK that aims to digitise the quote, bind and issue process for brokers and underwriters. Recently we had some interest from an insurer who was having difficulty pulling analysis out of their claims bordereaux, so I thought I would share what we did.
We recently spoke with several insurers about how they use the data in their claims bordereaux files and, after a little digging, we discovered that they were sitting on a goldmine of claims information in their systems - all of it ignored because it was non-standardised, unstructured and, frankly, a mess.
This data is extremely valuable but often goes to waste because it sits in Excel spreadsheets in a mix of Lloyd's and non-Lloyd's formats. If insurers could standardise and make sense of this information, they would be able to see where their claims are coming from, analyse how their business is performing and ultimately improve their underwriting decisions using data they already hold.
We're always interested in learning more about our clients' pain points and how they could be resolved, so we set out to build a solution to this problem.
What we did
We analysed thousands of Lloyd's-standard and non-Lloyd's-standard bordereau reports (Excel files in different formats and layouts, often with multiple tabs) from a number of insurers and identified the keywords and columns we wanted to extract.
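To make the keyword step concrete, here is a minimal sketch of the idea in Python with pandas. To be clear, this is not our production tooling: the keyword lists, field names and file name are all illustrative. The point is simply that every tab of a workbook gets scanned for headers that look like the fields we care about.

    # Minimal sketch of the keyword-matching step (illustrative, not our production code).
    import pandas as pd

    # Illustrative header variants seen across different bordereau formats.
    KEYWORDS = {
        "loss_description": ["loss description", "description of loss", "loss details"],
        "date_of_loss": ["date of loss", "loss date"],
        "total_incurred": ["total incurred", "incurred to date"],
    }

    def find_columns(path):
        """Scan every tab of a workbook and map each standard field name
        to the (tab, column) where a matching header was found."""
        matches = {}
        for tab, frame in pd.read_excel(path, sheet_name=None).items():
            for col in frame.columns:
                header = str(col).strip().lower()
                for field, variants in KEYWORDS.items():
                    if any(v in header for v in variants):
                        matches.setdefault(field, (tab, col))
        return matches

    # e.g. find_columns("q1_bordereau.xlsx")
    #   -> {"date_of_loss": ("Claims", "Date of Loss"), ...}

In practice the matching needs to be fuzzier than a substring test, but this captures the shape of the problem.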
We used our data extraction tools to structure and standardise the reports, consolidating them into a single master table containing hundreds of thousands of rows.
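As a rough illustration of the consolidation step, the sketch below (continuing the one above) renames each report's matched columns to a standard schema and stacks the reports into one table. For simplicity it assumes all matched columns sit on the same tab, which is often not true of real bordereaux, and the file names are again made up.

    # Continuing the sketch above: standardise one report, then stack many.
    def extract(path):
        matches = find_columns(path)              # from the previous sketch
        tab = next(iter(matches.values()))[0]     # assumes one tab holds all matches
        frame = pd.read_excel(path, sheet_name=tab)
        renames = {col: field for field, (_tab, col) in matches.items()}
        return frame[list(renames)].rename(columns=renames)

    paths = ["report_a.xlsx", "report_b.xlsx"]    # illustrative file names
    master = pd.concat([extract(p) for p in paths], ignore_index=True)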
The results
We were able to extract numerous claim-level data points, including (but not limited to) loss description, date of loss and total incurred losses.
On average, we were able to extract over 90% of the values for these key data points.
We then built a generic data pipeline to read, find, extract and load the relevant data into a structured database. The optimised solution is highly scalable, with the end-to-end process taking well under a second per Excel file on a standard laptop.
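To sketch the load step, the example below writes the master table into SQLite. That choice is purely for illustration (it ships with Python) rather than a description of the database we actually used, and the table and file names are made up.

    # Illustrative load step: write the master table into a structured database.
    import sqlite3

    conn = sqlite3.connect("claims.db")           # illustrative database file
    master.to_sql("claims_master", conn, if_exists="append", index=False)
    # Sanity check: row count in the structured table.
    print(pd.read_sql("SELECT COUNT(*) FROM claims_master", conn))
    conn.close()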
Once the relevant data had been extracted, we were able to present it back to the user, allowing them to analyse large volumes of data very quickly in tools like Tableau or Power BI. An example insight from this process can be seen in the dashboard below:
If you would like to talk to us about creating a proof of concept with your data, please get in touch with Oliver Fox at [email protected].
More information can be found at https://artificial.io/