Near-real-time/real-time methane prediction and visualization in underground longwall coal mining
Demirkan, Doga Cagdas
Date Issued
2022
Embargo Expires
2024-04-22
Abstract
Preventing methane explosions remains a challenging task in underground longwall coal mines. Mine-wide real-time methane monitoring is essential to tackling this challenge, and real-time decision-making is critical to stopping possible explosive methane accumulations and preventing accidents. To accomplish this, methane emissions and concentrations must be detected, monitored, and predicted across the entire mine.
Atmospheric monitoring systems are a critical part of monitoring mine ventilation systems in real time, and the number and location of sensors are essential to detecting any possible hazard. However, sensors cannot be installed in the most critical locations, such as the shearer cutting drum and along the longwall face, because of the movement of mining equipment. Thus, sensors provide information for a limited area, and their readings may be delayed by sensor response time, gas diffusion rate, and temperature.
Computational fluid dynamics (CFD) modeling can provide relatively accurate predictions of where explosive gas concentrations may form; however, it requires significant computational resources and time, which is not conducive to real-time decision-making. Lastly, decision-makers (mining personnel) need to observe and assess predictions to take the required precautions. Although automated shutdown mechanisms can be triggered by a sudden increase in methane emissions, previous accidents have shown that they are not entirely reliable. Such mechanisms can also interfere with productivity through unnecessary stops, lost time, and decreased production. Therefore, the predictions need to be visualized in a way that allows personnel to respond as quickly and accurately as possible.
This dissertation explores, evaluates, and benchmarks suitable artificial intelligence (AI) algorithms for predicting explosion hazards in 3D in near real time. The prediction performance of 10 algorithms is compared across seven datasets and assessed through classification accuracy. With the best performer chosen, an application-specific methodology based on a modified long short-term memory (LSTM) network is proposed to detect the formation of explosive methane-air mixtures at the longwall face and to identify possible explosive gas accumulations before they become hazards. Lastly, visualization of the predictions is compared across three platforms, namely a computer screen, virtual reality (VR), and augmented reality (AR), for accurate methane volume assessment and faster response time.
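The benchmarking step can be pictured as a simple accuracy-comparison loop over candidate models and datasets. The sketch below is illustrative only: the dissertation's 10 algorithms and seven datasets are not reproduced here, and the three scikit-learn models, the data format, and the train/test split are assumptions.

```python
# Illustrative benchmarking loop (not the dissertation's code): each candidate
# classifier is trained on each dataset and scored by classification accuracy.
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def benchmark(datasets):
    """datasets: iterable of (name, X, y) tuples, where X holds sensor-derived
    features and y holds explosive / non-explosive labels (assumed format)."""
    models = {
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "knn": KNeighborsClassifier(n_neighbors=5),
        "svm_rbf": SVC(kernel="rbf"),
    }
    scores = {}
    for name, X, y in datasets:
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        for model_name, model in models.items():
            model.fit(X_tr, y_tr)
            scores[(name, model_name)] = accuracy_score(y_te, model.predict(X_te))
    return scores
```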
The best-performing algorithm, LSTM, is modified, trained, tested, and validated on CFD model outputs for six shearer locations, covering similar positions and operational conditions of the cutting machine. The results show that the algorithm can predict explosive gas zones in 3D with overall accuracies ranging from 87.9% to 92.4% across different settings. Depending on the prediction area, output predictions take between 30 seconds and two minutes after measurement data are fed into the algorithm.
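As a rough illustration of this setup, a window of methane-sensor readings can be mapped to explosive/non-explosive labels for a block of 3D grid cells. The sketch below uses a plain Keras LSTM; the dissertation's modifications to the architecture, as well as the window length, sensor count, grid size, and hyperparameters, are not given in this abstract and are assumed here.

```python
# Minimal sketch only: a standard LSTM classifier mapping a time window of
# sensor readings to per-cell explosive / non-explosive probabilities.
import numpy as np
import tensorflow as tf

TIMESTEPS = 60   # readings per input window (assumed)
N_SENSORS = 8    # number of fixed monitoring sensors feeding the model (assumed)
N_CELLS = 500    # 3D grid cells classified per prediction region (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, N_SENSORS)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(256, activation="relu"),
    # One sigmoid output per grid cell: probability that the methane-air
    # mixture in that cell is within the explosive range.
    tf.keras.layers.Dense(N_CELLS, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training data would come from CFD simulations of the shearer locations:
# X has shape (samples, TIMESTEPS, N_SENSORS), y has shape (samples, N_CELLS).
X = np.random.rand(32, TIMESTEPS, N_SENSORS).astype("float32")  # placeholder
y = (np.random.rand(32, N_CELLS) > 0.9).astype("float32")       # placeholder
model.fit(X, y, epochs=2, batch_size=8, verbose=0)
```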
Lastly, a user study is designed to investigate the effects of two visual variables, transparency (transparent/opaque) and complexity (complex/simple), to compare the platforms. Four simulations are tested with 30 human subjects. Participants are asked to determine the explosive gas volume, and their completion times and eye-tracking data are recorded. Along with the recordings, saliency maps of the four simulations are created. The recorded data are evaluated with both quantitative and qualitative analysis. The quantitative analysis assesses the accuracy of explosive gas volume detection and completion time. In addition, the saliency maps of the designed simulations are analyzed together with the subjects' eye-tracking data to investigate the subjects' attention and reactions in AR, in VR, and on the computer screen. The qualitative analysis assesses subjects' responses to questions about how they felt. The quantitative results show a statistically significant difference between the computer screen and VR for the transparent and simple scenario: the subjects' volume-determination accuracy is better in VR. Moreover, the eye-tracking data show that subjects spent more time in non-salient regions in VR than on the computer screen. In addition, the qualitative analysis reveals that 90% of the subjects preferred VR because of the immersive experience, and 67% felt more confident in VR when investigating explosive zones and flagging them as dangerous. Finally, the users' preference ranking of platforms for explosive zone estimation is VR, followed by the computer screen and AR.
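The platform comparison described above amounts to testing whether per-subject volume-estimation error differs between the computer screen and VR. The sketch below is a minimal example of such a test; the abstract does not state which statistical test was used, so a Welch t-test with a nonparametric alternative is assumed, and the error values are placeholders, not study data.

```python
# Minimal sketch of the quantitative comparison: volume-estimation error per
# participant on the computer screen vs. in VR. Values are placeholders.
from scipy import stats

screen_error = [0.18, 0.22, 0.15, 0.30, 0.25]  # placeholder per-subject errors
vr_error = [0.10, 0.12, 0.09, 0.16, 0.11]      # placeholder per-subject errors

t_stat, p_t = stats.ttest_ind(screen_error, vr_error, equal_var=False)
u_stat, p_u = stats.mannwhitneyu(screen_error, vr_error, alternative="two-sided")
print(f"Welch t-test: t={t_stat:.2f}, p={p_t:.3f}")
print(f"Mann-Whitney U: U={u_stat:.1f}, p={p_u:.3f}")
```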
Rights
Copyright of the original work is retained by the author.