Recent Submissions
Publication
Reconstructing high-resolution 2D data from low-resolution inputs using a super-resolution conditional generative adversarial network
Alnabbat, Mohammed; Bernstein, Brett; Zhang, Mengli
Super-resolution is an image processing technique that takes a low-resolution image and makes it high-resolution (Nasrollahi and Moeslund, 2014). Recent studies in image processing and medical imaging use Super-Resolution Conditional Generative Adversarial Networks (SRCGANs) to achieve super-resolution, successfully generating high-resolution images that are perceptually indistinguishable from real images (Nasser et al., 2022). We apply the concept of super-resolution to gridded gravity anomalies and train an SRCGAN to learn their complex signal structures and generate high-resolution data from coarsely sampled grids. Typical generative adversarial networks (GANs) consist of two neural networks, a generator G and a discriminator D, which learn and improve by competing with each other. G generates an image and D attempts to determine whether it is real or generated. G then updates to make its output more realistic, and D updates to better distinguish between real and generated images. We adapt the SRCGAN architecture of Ledig et al. (2017) for use with gridded gravity data. Our generator G takes as input a low-resolution grid X, coarsely sampled from a full data set Y, and outputs an up-sampled, high-resolution grid Ŷ. The network is trained and tested using gridded, regional gravity data from Australia (Geoscience Australia, 2023). These fully sampled data Y are resized to a common shape of 128×128 and down-sampled by a factor of 4 to shape 32×32 to obtain the network inputs X. A total of 3546 pairs of fully sampled and coarsely sampled grids are used for training, 799 pairs are used for validation during training, and 88 pairs are reserved to test the network after training. The trained generator is tested with one of the reserved data pairs. Figure 1 shows the fully sampled grid Y, the coarse grid X, and the high-resolution grid Ŷ generated from X. We are interested in how well the generator can reconstruct high-resolution signal from the low-resolution input alone, so we quantify the increase in information by comparing the differences Y − X and Y − Ŷ, histograms of which are shown in Figure 1, along with their mean absolute error (MAE) and root mean square error (RMSE). These metrics are summarized in Table 1. The difference between the fully sampled data and the coarse data, Y − X, produces a high MAE and RMSE and is characterized by a noticeably broad spectrum of differences. The comparison between Y and Ŷ yields lower metrics and a sharpening of the histogram. The improved metrics for the generated high-resolution data show that our trained SRCGAN generator can reconstruct missing signal structure from coarsely sampled, gridded gravity data. The generated grid is upscaled by a factor of 4 in both directions while maintaining the integrity of the input signal. The histogram of differences shows that the generated high-resolution data closely resemble the full data; the remaining discrepancies arise because the SRCGAN does not completely reconstruct the high-frequency content, which may be improved by further tuning the network and its training process. These results are a promising look into what may be a robust and versatile data reconstruction method.
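The abstract above describes the adversarial setup only at a high level. The following is a minimal PyTorch-style sketch of how a generator/discriminator pair could be trained on 32×32 → 128×128 grid pairs; the layer sizes, loss weights, and optimizer settings are illustrative assumptions, not the configuration used in the study, and the real SRCGAN of Ledig et al. (2017) uses deeper residual blocks and a perceptual loss.

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Upsamples a 32x32 gravity grid to 128x128 (4x in each direction)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores a 128x128 grid as real (fully sampled) or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(128 * 32 * 32, 1),
        )

    def forward(self, y):
        return self.net(y)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def train_step(x_lr, y_hr):
    """One adversarial update: D learns to separate real from generated grids,
    then G learns to fool D while staying close to the fully sampled grid."""
    # Discriminator update on real grids and detached generated grids.
    y_fake = G(x_lr).detach()
    d_loss = bce(D(y_hr), torch.ones(y_hr.size(0), 1)) + \
             bce(D(y_fake), torch.zeros(y_hr.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator update: content (L1) term plus a small adversarial term.
    y_fake = G(x_lr)
    g_loss = l1(y_fake, y_hr) + 1e-3 * bce(D(y_fake), torch.ones(y_hr.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example with random stand-in grids (a batch of 8 coarse/full pairs).
x = torch.randn(8, 1, 32, 32)
y = torch.randn(8, 1, 128, 128)
print(train_step(x, y))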
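The reported comparison of Y − X and Y − Ŷ through MAE and RMSE takes only a few lines. The NumPy sketch below uses a random stand-in grid, simple decimation for the 4× down-sampling, nearest-neighbor upsampling of X back to 128×128 for the comparison, and a perturbed copy of Y as a stand-in generator output; all of these are assumptions rather than the study's actual data or preprocessing.

import numpy as np

def mae(a, b):
    return np.mean(np.abs(a - b))

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

rng = np.random.default_rng(0)
Y = rng.normal(size=(128, 128))                          # stand-in fully sampled grid
X_up = Y[::4, ::4].repeat(4, axis=0).repeat(4, axis=1)   # coarse grid, upsampled back to 128x128
Y_hat = Y + 0.1 * rng.normal(size=Y.shape)               # stand-in generator output

print("Y - X   :", mae(Y, X_up), rmse(Y, X_up))
print("Y - Yhat:", mae(Y, Y_hat), rmse(Y, Y_hat))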
Publication
Impact of lossy compression errors on passive seismic data analyses
(Colorado School of Mines. Arthur Lakes Library, 2023) Issah, Abdul Hafiz S.; Martin, Eileen R.
New technologies such as low-cost nodes and distributed acoustic sensing (DAS) are making it easier to continuously collect broadband, high-density seismic monitoring data. To reduce the time to move data from the field to computing centers, reduce archival requirements, and speed up interactive data analysis and visualization, we are motivated to investigate the use of lossy compression on passive seismic array data. In particular, there is a need not just to quantify the errors in the raw data, but also to characterize the spectra of these errors and the extent to which they propagate into results such as the detectability and arrival-time picks of microseismic events. We compare three types of lossy compression: sparse thresholded wavelet compression, zfp compression, and low-rank singular value decomposition compression. We apply these techniques to two publicly available datasets: an urban dark fiber DAS experiment and a surface DAS array above a geothermal field. We find that, depending on the level of compression needed and the importance of preserving large versus small seismic events, different compression schemes are preferable.
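Of the three schemes compared in the abstract above, low-rank singular value decomposition compression is the simplest to illustrate. The NumPy sketch below applies it to a random stand-in record; the channel count, sample count, and retained rank are illustrative assumptions, not the datasets or parameters used in the study.

import numpy as np

def svd_compress(data, rank):
    """Keep only the leading singular components of a channels-by-samples record."""
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    return U[:, :rank], s[:rank], Vt[:rank]

def svd_reconstruct(U, s, Vt):
    """Rebuild the (lossy) record from the stored low-rank factors."""
    return (U * s) @ Vt

rng = np.random.default_rng(0)
record = rng.normal(size=(200, 5000))   # stand-in DAS record: 200 channels x 5000 time samples
U, s, Vt = svd_compress(record, rank=20)
approx = svd_reconstruct(U, s, Vt)

stored = U.size + s.size + Vt.size       # values kept after compression
ratio = record.size / stored
err = np.sqrt(np.mean((record - approx) ** 2))
print(f"compression ratio ~{ratio:.1f}x, reconstruction RMSE {err:.3f}")

The trade-off studied in the paper shows up directly in this sketch: a smaller rank gives a higher compression ratio but a larger reconstruction error, which in turn affects how well small events survive compression.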