Making data-driven decisions:
our customers' success
Case study 1: calories analysis
Goal: perform nutrition analysis on 1700 US armed forces recipes in PDF format
Nutrition information needed to be extracted from 1700 recipes in order to perform analyses (e.g. average calorie count, highest fat content, etc.)
Tagging: tag the different pieces of information to be extracted in a handful of documents
The user defines attributes and selects them in a few example recipes. iQC learns from the context so that it can extract the information at scale
Test & correct: give feedback to iQC on the results
Process another small sample to review the results. iQC can be taught if it made any mistakes: by confirming the AI's choices, or by invalidating them and selecting the correct results
Processing: upload the 1700 recipes for processing
iQC is now ready to automatically extract the desired information from the entire batch of documents with a high level of accuracy
Export for analysis
The information is exported to Excel for various analyses of the nutritional data (e.g. the 10 most calorific recipes in the US armed forces contain 8021 calories on average)
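The kind of "top N" analysis mentioned above can be sketched in a few lines of Python once the extracted values are loaded from the export. The recipe names and calorie values below are invented placeholders for illustration, not the customer's actual data.

```python
# Sketch: average the calorie count of the N most calorific recipes,
# as one might do on the data iQC exported to a spreadsheet.
# All values below are made up for illustration.
recipes = [
    {"name": "Recipe A", "calories": 9100},
    {"name": "Recipe B", "calories": 7800},
    {"name": "Recipe C", "calories": 6500},
    {"name": "Recipe D", "calories": 5200},
]

def top_n_average(rows, n, key="calories"):
    """Average the `key` value of the n largest rows."""
    top = sorted(rows, key=lambda r: r[key], reverse=True)[:n]
    return sum(r[key] for r in top) / len(top)

print(top_n_average(recipes, 2))  # average of the two most calorific recipes: 8450.0
```

In practice the rows would come from the Excel export (e.g. via a spreadsheet library) rather than a hard-coded list.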
Case study 2: How to read composite logs like a geologist?
Composite logs are useful but unstructured
For geologists, composite logs are key documents for modelling the subsurface. They synthesize observations made while drilling and measurements performed in the open-hole sections.
Unfortunately, composite logs are frequently saved in unstructured formats such as PDF or image files; as a result, they cannot be used in any digital interpretation process unless they are reformatted manually.
What does an AI have to do to read composite logs like a geologist?
Reading a composite log is complex. The information to be captured is graphical, such as lithological intervals or hydrocarbon show symbols, but also textual, such as geological descriptions. In addition, the information is laid out along a vertical depth axis, which must be recognized in order to link the detected pieces of information together
A single interface and workflow to capture different kinds of information
The iQC platform makes it possible to train and use computer vision and text analysis tools in the same workflow and in a single graphical user interface (GUI).
All the models are built, benchmarked and applied without coding, simply by capturing the user's expertise in the GUI.
Extracted data are made available along a depth axis
iQC not only detects and classifies the graphical and textual information but also links it together. In the case of composite logs, the unique index linking the lithology, shows and geological descriptions is the well depth. Therefore, all detections are converted to depth before being exported in structured formats
Once structured, the information becomes actionable data. The lithology and shows of all the wells in a geological basin can be analyzed to detect patterns that could not be found without access to thousands of lithological intervals, shows or descriptions.
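The depth-conversion idea can be sketched as follows: each detection carries a pixel position on the page, and a linear calibration of the depth track turns that position into a well depth, giving lithology, shows and descriptions one shared index. The calibration values and detections below are invented for illustration; they are not how iQC is actually implemented.

```python
# Sketch: convert pixel positions of detections to well depth so that
# lithology, shows, and descriptions share a single depth index.
# Calibration values and detections are made up for illustration.

def pixel_to_depth(y_px, y_top_px, y_bot_px, depth_top_m, depth_bot_m):
    """Linearly interpolate a pixel row into a depth in metres."""
    frac = (y_px - y_top_px) / (y_bot_px - y_top_px)
    return depth_top_m + frac * (depth_bot_m - depth_top_m)

# Detections from different models, each carrying a pixel position.
detections = [
    {"kind": "lithology", "label": "sandstone", "y_px": 250},
    {"kind": "show", "label": "oil show", "y_px": 500},
    {"kind": "description", "label": "fine grained, well sorted", "y_px": 510},
]

# Depth track calibrated between two tick marks read on the log
# (pixel rows 200..1000 span depths 1500..1900 m in this sketch).
for d in detections:
    d["depth_m"] = pixel_to_depth(d["y_px"], 200, 1000, 1500.0, 1900.0)

for d in sorted(detections, key=lambda d: d["depth_m"]):
    print(f'{d["depth_m"]:7.1f} m  {d["kind"]:12s} {d["label"]}')
```

Once every detection is keyed by depth, exporting to a structured format is a straightforward serialization of the sorted records.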
Case study 3: Live access to extracted information using Business Intelligence tools
From unstructured to BI
Business Intelligence tools really help to base decisions on data. But feeding dashboards with data is hard when the data are locked in unstructured file formats.
One of our customers, a North Sea operator, has a collection of millions of private and public documents about E&P activity in the North Sea over the last 60 years. Most of them are only available as scans of paper documents.
First, build efficient models ...
Half a million reports and well logs represent a huge diversity of documents. As in any machine learning project, the training phase is key: it ensures the models capture this diversity and determines the success of the project.
This step is made really easy in iQC. A user interface allows a Subject Matter Expert to highlight the key pieces of information. Once done, the model is trained just by hitting a button! No coding, just AI!
... and validate them!
iQC offers all the machine learning metrics needed to test whether the produced models are accurate and discriminant enough to extract the targeted actionable data
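Validation metrics of this kind typically include precision, recall and F1 score per extracted class. The sketch below shows how such metrics are computed in general; the document-type labels are invented examples, not iQC's actual categories or API.

```python
# Sketch: precision, recall and F1 for one class of extraction results,
# the standard metrics used to decide if a model is production-ready.
# The expected/predicted labels below are made up for illustration.

def precision_recall_f1(expected, predicted, positive):
    """Compute precision, recall and F1 for the `positive` class."""
    tp = sum(1 for e, p in zip(expected, predicted) if p == positive and e == positive)
    fp = sum(1 for e, p in zip(expected, predicted) if p == positive and e != positive)
    fn = sum(1 for e, p in zip(expected, predicted) if p != positive and e == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

expected  = ["log", "report", "log", "log", "report", "log"]
predicted = ["log", "log", "log", "report", "report", "log"]
p, r, f = precision_recall_f1(expected, predicted, positive="log")
print(p, r, f)  # 0.75 0.75 0.75 on this toy sample
```

A model is usually promoted to production only when these scores stay high across a held-out validation sample that the Subject Matter Expert did not tag.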
It is now time to go to production and to share the results.
The models, trained and validated on a few hundred documents, can now automatically extract and save information from any volume of documents.
The extracted information is saved into a database and can be shared through a virtualization layer connected to the dashboard tool.
The analysis can now embrace the whole organization's experience!
The analysis can now draw on all the information extracted from the company's legacy documents, and any piece of information can be sourced and verified directly in the documents it comes from.