So, I had my first experience presenting an ePoster at the EAGE annual conference last week. To be honest, I was a little anxious about it, for several reasons. First, there was no lecture dedicated to AI for geoscience in the EAGE lecture program (despite most of the talks and demos at the exhibition booths being about AI ...), so I was not sure I would have an audience. Second, I had some poor experiences with poster presentations where 20 people crowd around a piece of paper with unreadable text printed in a 12pt font!
Click on the poster to view it!
But in the end, this ePoster session went pretty well, first of all because of the other presenters in the same session, dedicated to #data.
Sung-Bin Ahn from TOTAL convinced the audience of the value of XploBoard, an exploration information platform (BI & GI) his team has developed over the last 3 years. XploBoard shares, across the whole TOTAL E&P community, technical and financial data about E&P assets that were previously accessible only to the few who knew their way through the data silos of such a large company. Data democratisation is no longer just a concept at TOTAL; it is really at work. More on EarthDoc here.
Marco Piantanida illustrated very well how a simple Jupyter Notebook is now a day-to-day tool for ENI reservoir engineers interpreting the huge volumes of DTS (Distributed Temperature Sensing) data made available on the ENI data lake. Thanks to data distribution on Hadoop and Hive, end users can plot large amounts of temperature data, measured every 5 minutes with a spacing of 0.5 ft on 5 wells over several years, as simply as making a 20-point graph in Excel! In addition, the toolbox supports any type of alert by email or SMS. We no longer have any alibi for not crunching the fibre-optic measurements made in our wells! More here.
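To give a feel for the scale involved, here is a minimal sketch (with synthetic data and hypothetical column names, not ENI's actual toolbox) of the kind of aggregation that makes years of DTS readings plottable in a notebook: pivot the measurements into a time x depth matrix, then resample it down to something a chart can render.

```python
# Minimal sketch with synthetic DTS data: one temperature reading per
# 0.5 ft depth step, every 5 minutes, pivoted into a time x depth matrix.
import numpy as np
import pandas as pd

# Simulate one day of DTS data for a short well section.
times = pd.date_range("2019-06-01", periods=288, freq="5min")  # every 5 min
depths = np.arange(1000.0, 1010.0, 0.5)                        # 0.5 ft spacing
records = pd.DataFrame(
    [(t, d, 60 + 0.02 * (d - 1000) + np.sin(i / 20.0))
     for i, t in enumerate(times) for d in depths],
    columns=["time", "depth_ft", "temp_c"],
)

# Pivot into the matrix an engineer would render as a heatmap;
# hourly resampling keeps the plot light even over years of data.
matrix = records.pivot(index="time", columns="depth_ft", values="temp_c")
hourly = matrix.resample("1h").mean()
print(hourly.shape)  # 24 hourly rows x 20 depth steps
```

In the real setup the `records` frame would come from a Hive query against the data lake rather than a local simulation, but the downsample-then-plot step is the same.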
Raphael Lencrerot explained to us how TOTAL uses seismic data compression in its beam migration algorithm. The compression he implemented not only reduces the beam volume by a factor of 13 (253 Tb of beams are reduced to 16.1 Tb) but also makes reading and decompressing the compressed data faster than just reading the original data set. This performance is due to a clever parallelisation of the algorithm on the TOTAL HPC environment. The algorithm is now ready for a GPU implementation with even better performance. More here.
Within this panel of data enthusiasts, I did my best to explain that at AgileDD, not only do we develop awesome models to index unstructured documents, but we also take care to measure the performance of those models. The extended abstract is here.
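By "measuring performance" I mean the classic information-retrieval trio. A hypothetical sketch of the kind of check involved (the label names are made up; this is not AgileDD's actual evaluation code): compare the labels a model assigns to a document against the expected ones and compute precision, recall and F1.

```python
# Hypothetical evaluation sketch: precision, recall and F1 for the labels
# an indexing model assigns to one document, vs. the expected labels.
def precision_recall_f1(predicted, expected):
    """Score one document's predicted label set against the ground truth."""
    tp = len(set(predicted) & set(expected))        # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(expected) if expected else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example with made-up document labels: 2 of 3 predictions are correct,
# and 2 of 3 expected labels were found.
p, r, f1 = precision_recall_f1(
    predicted=["well_log", "core_photo", "map"],
    expected=["well_log", "map", "report_header"],
)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.667 0.667 0.667
```

Averaging these scores over a held-out set of documents is what turns "the model works" into a number you can defend in front of an audience.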
In the end, this session attracted 25-30 people, as passionate about geoscience data as the presenters, and, thanks to the chairing by Amik St-Cyr from Shell, the discussion was very interesting; I was impressed by the quality of the questions and replies.
So, if I am offered an ePoster instead of a talk next time, I will no doubt say yes.