• 3D synthetic aperture for controlled-source electromagnetics

      Snieder, Roel, 1958-; Knaak, Allison; Berger, John R.; Sava, Paul C.; Krahenbuhl, Richard A.; Schneider, Jennifer J. (Colorado School of Mines. Arthur Lakes Library, 2015)
      Locating hydrocarbon reservoirs has become more challenging with smaller, deeper or shallower targets in complicated environments. Controlled-source electromagnetics (CSEM), is a geophysical electromagnetic method used to detect and derisk hydrocarbon reservoirs in marine settings, but it is limited by the size of the target, low-spatial resolution, and depth of the reservoir. To reduce the impact of complicated settings and improve the detecting capabilities of CSEM, I apply synthetic aperture to CSEM responses, which virtually increases the length and width of the CSEM source by combining the responses from multiple individual sources. Applying a weight to each source steers or focuses the synthetic aperture source array in the inline and crossline directions. To evaluate the benefits of a 2D source distribution, I test steered synthetic aperture on 3D diffusive fields and view the changes with a new visualization technique. Then I apply 2D steered synthetic aperture to 3D noisy synthetic CSEM fields, which increases the detectability of the reservoir significantly. With more general weighting, I develop an optimization method to find the optimal weights for synthetic aperture arrays that adapts to the information in the CSEM data. The application of optimally weighted synthetic aperture to noisy, simulated electromagnetic fields reduces the presence of noise, increases detectability, and better defines the lateral extent of the target. I then modify the optimization method to include a term that minimizes the variance of random, independent noise. With the application of the modified optimization method, the weighted synthetic aperture responses amplifies the anomaly from the reservoir, lowers the noise floor, and reduces noise streaks in noisy CSEM responses from sources offset kilometers from the receivers. 
Even with changes to the location of the reservoir and perturbations to the physical properties, synthetic aperture is still able to highlight targets correctly, which allows use of the method in locations where the subsurface models are built from only estimates. In addition to the technical work in this thesis, I explore the interface between science, government, and society by examining the controversy over hydraulic fracturing and by suggesting a process to aid the debate and possibly other future controversies.
    • Accessed drainage volume and recovery factors of fractured horizontal wells under transient flow

      Ozkan, E.; Yesiltepe, Caglar; Sarak, Hulya; Tutuncu, Azra (Colorado School of Mines. Arthur Lakes Library, 2015)
      The objective of the research presented in this Master of Science thesis is to propose a practical approach to estimate the drainage volume and recovery factors of fractured horizontal wells in tight, unconventional reservoirs under economic constraints. For conventional wells, economic depletion of a given drainage area is mainly dictated by physical depletion. For fractured horizontal wells in unconventional reservoirs, however, economic depletion rates are usually reached during transient flow and recovery factors are insensitive to well spacing. A consequence of this phenomenon is the disparity of the observed ultimate recovery from the estimates based on well-spacing considerations, which is also manifested in the inconsistencies of the estimated recovery factors of wells in unconventional reservoirs. Furthermore, economic depletion during transient flow also has implications on more efficient utilization of unconventional hydrocarbon resources. In this work, a contacted reservoir volume (CRV) is defined based on the effective transient drainage area of the well under linear-flow conditions. This definition enables the estimation of physically and economically meaningful recovery factors based on accessable reserves of the well for an economic cut-off rate. Equations to estimate effective drainage areas of fractured horizontal wells under linear and compound linear flow conditions are derived and related to the CRV for a given transient production rate. This approach provides a practical means of optimizing hydraulic fracture spacing along a horizontal well. Example applications of the proposed approach are demonstrated and the results are discussed. The work presented in this thesis does not consider the geomechanical changes caused by stimulation and production from the reservoir. These aspects are left for future studies.
    • Active and passive electrical and seismic time-lapse monitoring of earthen embankments

      Revil, André, 1970-; Rittgers, Justin Bradley; Mooney, Michael A.; Sava, Paul C.; Schneider, Jennifer J.; Smith, Jessica, 1980-; Markiewicz, Richard (Colorado School of Mines. Arthur Lakes Library, 2015)
      In this dissertation, I present research involving the application of active and passive geophysical data collection, data assimilation, and inverse modeling for the purpose of earthen embankment infrastructure assessment. Throughout the dissertation, I identify several data characteristics, and several challenges intrinsic to characterization and imaging of earthen embankments and anomalous seepage phenomena, from both a static and time-lapse geophysical monitoring perspective. I begin with the presentation of a field study conducted on a seeping earthen dam, involving static and independent inversions of active tomography data sets, and self-potential modeling of fluid flow within a confined aquifer. Additionally, I present results of active and passive time-lapse geophysical monitoring conducted during two meso-scale laboratory experiments involving the failure and self-healing of embankment filter materials via induced vertical cracking. Identified data signatures and trends, as well as 4D inversion results, are discussed as an underlying motivation for conducting subsequent research. Next, I present a new 4D acoustic emissions source localization algorithm that is applied to passive seismic monitoring data collected during a full-scale embankment failure test. Acoustic emissions localization results are then used to help spatially constrain 4D inversion of collocated self-potential monitoring data. I then turn to time-lapse joint inversion of active tomographic data sets applied to the characterization and monitoring of earthen embankments. Here, I develop a new technique for applying spatiotemporally varying structural joint inversion constraints. The new technique, referred to as Automatic Joint Constraints (AJC), is first demonstrated on a synthetic 2D joint model space, and is then applied to real geophysical monitoring data sets collected during a full-scale earthen embankment piping-failure test. 
Finally, I discuss some non-technical issues related to earthen embankment failures from a Science, Technology, Engineering, and Policy (STEP) perspective. Here, I discuss how the proclaimed scientific expertise and shifting of responsibility (Responsibilization) by governing entities tasked with operating and maintaining water storage and conveyance infrastructure throughout the United States tends to create barriers for 1) public voice and participation in relevant technical activities and outcomes, 2) meaningful discussions with the public and media during crisis communication, and 3) public perception of risk and the associated resilience of downhill communities.
    • Agent-based approach to social license durability, An

      Nakagawa, Masami; Bahr, Kyle; Rolston, Jessica Smith, 1980-; Delborne, Jason; Grubb, John W.; Hitzman, Murray Walter; Boutilier, Robert (Colorado School of Mines. Arthur Lakes Library, 2015)
      Public expectations for what constitute responsible practices of mining and other extractive companies have been evolving and becoming evermore complex. As focal organizations struggle to come to terms with increased expectation, tools must be developed to assess performance and, if possible, predict and forecast how their performance will be received by stakeholders in the future. The main purpose of this work is to provide a tool for assessing the current state and longevity of public perceptions of corporations who are already measuring their social performance. This tool should allow managers and other decision makers within a focal organization to plan for and manage social risk to their operations by giving them a sense of the potential social outcomes that a specific project may generate. In addition, it may provide insight to others interested in the social license of a given project, such as governments, NGO's, and individual stakeholders and stakeholder groups. The work provided herein is comprised of an agent-based model of fluctuations in social license to operate through the use of opinion diffusion and stakeholder network creation. Agent-based modeling is a bottom-up approach that explores complex macroscopic phenomena through the implementation of simple microscopic rules for the behavior of individual agents. This method allows researchers to explore and quantify potential outcomes. The model created for this work demonstrates the change in social license for a group of stakeholders with a specific distribution of influence and individual consensus levels. Furthermore, it successfully recreates network structures thought to be associated with different levels of durability of the social license granted by a stakeholder network. These network structures are analyzed for their stability and ability to self-propagate within the model.
    • Algorithms for increased efficiency of finite element slope stability analysis

      Griffiths, D. V.; Farnsworth, Richard S.; Pei, Shiling; Mustoe, Graham G. W. (Colorado School of Mines. Arthur Lakes Library, 2015)
      An algorithm was designed to improve the efficiency of a finite element slope stability program. The finite element software currently uses a bisectional algorithm to arrive at a factor of safety. However, several inefficiencies have been identified in the bisectional algorithm as it is applied to slope stability analysis. Among them are the high computational costs of failing to converge. Several algorithms were proposed to address these inefficiencies, and through optimization and verification, the most efficient of the proposed algorithms has been selected. The Farnsworth-Griffiths Adaptive (FG-Adaptive) algorithm adjusts the size of step it takes in the strength reduction factor based on the number of iterations it takes for the model to converge. This leads to fewer SRF trials and ultimately to faster solutions.
    • Analysis and dynamic active subspaces for a long term model of HIV

      Pankavich, Stephen; Loudon, Tyson S.; Collis, Jon M.; Constantine, Paul G. (Colorado School of Mines. Arthur Lakes Library, 2015)
      The Human Immunodeficiency Virus (HIV) disables many components of the body's immune system and, without antiretroviral treatment, leads to the onset of Acquired Immune Deficiency Syndrome (AIDS) and subsequently death. The infection progresses through three stages: initial or acute infection, an asymptomatic or latent period, and finally AIDS. Modeling the entire time course of HIV within the body can be difficult as many models have oversimplified its biological dynamics in the effort to gain mathematical insight but fail to capture the three stages of infection. Only one HIV model has been able to describe the entire time course of the infection, but this model is large and is expensive to simulate. In this paper, we'll show there are two viral free steady states and conduct a stability analysis of one of the steady states. Then, we'll present a reduced order model for the T-cell count 1700 days after initial infection using active subspace methods. Building on the previous results, we'll create a global in time approximation of the T-cell count at any time using dynamic active subspaces.
    • Analysis of post-wildfire debris flows: climate change, the rational equation, and design of a dewatering brake

      Santi, Paul M. (Paul Michael), 1964-; Brunkal, Holly Ann; Higgins, Jerry D.; Cannon, Susan H.; Nakagawa, Masami; Ozbay, M. Ugur (Colorado School of Mines. Arthur Lakes Library, 2015)
      This dissertation presents the results of two lines of inquiry into the frequency and magnitude of post-wildfire debris flows, and a third investigation into the design parameters for a debris-flow mitigation structure are presented as an advancement of the current body of knowledge on the hazards and risks of post-wildfire debris flows, and the consideration of a potential mitigation design. Increasing areas burned by wildfire and increasing intense precipitation events with predicted climate change will produce a significant increase in the occurrence of post-wildfire debris flows in the western United States. A positive correlation is shown between an increase in wildfire area and number of debris flows. The probability of a debris flow occurring from a burned watershed is influenced by climate change. With conservative model interpretation, post-wildfire debris-flow probabilities for individual drainage basins increase on average by 20.6%, with different climate scenarios increasing the probability of post-wildfire debris flows by 1.6% to 38.9%. A predictive debris-flow volume equation for the Intermountain West is influenced by factors that will be affected by climate change in the coming decades, and debris-flow volumes are calculated to increase with changing conditions by 3.7% to 52.5%. Understanding the future implications of increased incidence of wildfire-related debris flows will help agencies and communities better manage the associated risk. Compilation of a database of debris-flow peak discharges (Q) allowed for a comparison with the expected basin discharge as computed using the rational equation, Q=CIA; where C= an infiltration coefficient, I is the rainfall intensity, and A is the area of the basin. The observed values of Q for debris flows in unburned and burned areas were divided by the computed Q values of runoff using the rational method. This ratio is the ‘bulking factor’ for that debris-flow event when compared with water flooding. 
It was shown that unburned and burned basins constitute two distinct populations for debris–flow bulking, and that the bulking factors for burned areas are consistently higher than for unburned basins. Previously published bulking factors for unburned areas fit the dataset in about 50% of the cases. Conversely, the bulking factors for burned areas that were found in the published literature were well below the increases seen in over half of the cases investigated in this study, and would result in a significant underestimation of the peak discharge from a burned basin for the given rainfall intensity. Peak discharge bulking rates were found to be inversely related to basin area. Knowledge of the potential increase to the peak discharge from a basin during a debris flow event will help workers better design conveyances and thus will reduce risk to proximal infrastructure. While the first two studies address gaps in knowledge for design events, the third study considers the design elements in the debris –flow mitigation process. The investigation looks specifically at a mitigation structure whose design elements are not well-documented in the literature. A small-scale flume experiment was conducted to assess the design considerations for a horizontal dewatering debris flow brake. A design sequence, which was previously unavailable in the published literature, is developed from comparison to other mitigation design strategies and from results of laboratory flume experiments. It is concluded that the most important input parameters into the design of a debris-flow dewatering brake are the expected thickness of the debris flow deposit and the channel shape. The volume of debris that can be stopped and stored by this mitigation design is a function of the debris flow depth and the channel slope. 
The thickness of the debris that is arrested on the grate depends on the depositional properties of the debris-flow mass, such as the unit weight of the material, but was not affected by volume of the debris available. The ideal brake is a free-draining surface with an aperture smaller than the D90 value for the debris-flow grain size distribution. An easily implemented design that could be rapidly installed in the channel to reduce the velocity and volume of a debris flow is the goal of this dewatering structure. This design has the potential of being implemented in recently burned areas to reduce the debris-flow risk to areas downstream. Evaluation of the hazard posed by and the potential risk of, a debris-flow event involves many variables. Two variables in risk assessment are the likelihood of an event happening and the severity of that event, in addition to the likely extent of the losses if a particular event takes place. This research shows that there is an increase in the likelihood of post-wildfire debris flows happening with climate change, and that the magnitude of debris –flow events, with respect to the peak discharge measurement, is more severe in a post-wildfire setting than in an unburned basin. Post-wildfire debris flow risk could be reduced for communities at the wildland urban interface by the implementation of a horizontal debris-flow dewatering brake.
    • Analysis of synchronous machines with bypassed coils using FEM-based modeling software

      Sen, Pankaj K.; Redmon, Moshe Jeffrey; Patterson, Shawn; Ammerman, Ravel F. (Colorado School of Mines. Arthur Lakes Library, 2015)
      This thesis examines the viability of analyzing large synchronous machine performance with bypassed stator coils using modeling software. Defined operating points of an existing hydro-electric generator with bypassed coils are simulated using Finite Element Method (FEM)-based electromagnetic modeling software using a 2-D model of the generator consisting of the rotor and stator core, rotor poles, field winding, damper bars, and stator coils. Separate electrical circuits for the field winding, the damper winding, and the stator circuits are also defined to simplify the model. The resulting model is then used to simulate a machine at rated operating conditions without bypassed coils and then at various operating conditions with bypassed coils. The results for the model are further analyzed and compared to corresponding measured field data to assess the accuracy of the model.
    • Analytical investigation of boundaries in naturally fractured unconventional reservoirs, An

      Ozkan, E.; Greenwood, Judson T.; Miskimins, Jennifer L.; Tutuncu, Azra (Colorado School of Mines. Arthur Lakes Library, 2015)
      This research presents a heuristic approach to develop an analytical model to study the effects of a stimulated zone in a fractured unconventional reservoir and the inherent boundaries that are observed. To simulate a stimulated reservoir volume (SRV) around the fractured horizontal well surrounded by a virgin outer reservoir, three separate solutions are generated and superimposed: Solution 1 - a multiply fractured, horizontal-well in an infinite-acting, homogeneous reservoir with the properties of the outer zone; Solution 2 - a multiply fractured, horizontal-well in a bounded, homogeneous (un-fractured) reservoir with the properties of the outer reservoir; and Solution 3 - a multiply fractured, horizontal-well in a bounded, naturally fractured reservoir with the properties of the stimulated zone. The solution for the composite reservoir consisting of a stimulated (naturally fractured) reservoir surrounded by an infinite acting, un-fractured (virgin) reservoir is obtained by subtracting Solution 2 from Solution 1 and then adding Solution 3. The same approach is also applied to develop a solution for the case where there is an additional transition zone between the SRV and the outer (virgin) reservoir. This method creates an approximate solution for the composite-reservoir system. Although the model is derived analytically, computations require numerical methods and the model is therefore referred to as semi-analytical. The model is verified against literature models and an industry numerical simulator to find its limitations. This verification shows that the accuracy of the model is dependent on the size of the stimulated zone. For a large stimulated zone, because the flux profiles along the boundaries of the fractured (Solution 3) and un-fractured (Solution 2) reservoirs are not equal, the model over-predicts the drawdown pressure. 
However in real-world examples of multiply fractured horizontal wells, the stimulated zone is much smaller and the model closely matches the drawdown pressures calculated in the numerical simulator. Therefore, the heuristic approach used in this work leads to an ad-hoc solution for the common configurations of fractured horizontal wells in shale reservoirs. Several synthetic examples are considered to show that the solution developed in this work can be used to identify the flow regimes after the effect of the stimulated reservoir boundary (that is, the fracture tip effects) are felt. This is an advantage over the commonly used trilinear model when the diffusivities of the stimulated and virgin reservoirs are comparable. Although not explored in this research, the ultimate utility of the proposed approach is in modeling multiple fractured-horizontal-wells to study the interference among SRVs. The fracture enhancement and extent influence the productivity of the well more than any other parameter and should be of utmost importance to characterize. And last, this model can be used with other tools to identify optimal full field development.
    • Analytical solution and numerical modeling study of gas hydrate saturation effects on porosity and permeability of porous media, An

      Zerpa, Luis E.; Gao, Fangyu; Koh, Carolyn A. (Carolyn Ann); Yin, Xiaolong (Colorado School of Mines. Arthur Lakes Library, 2015)
      A gas hydrate is a type of crystallized compound formed by small gas molecules and water under high pressure and low temperature. Natural gas hydrate reservoirs exist mostly in offshore areas of outer continental margins, and some also occur in permafrost areas. Worldwide methane content in gas hydrate accumulations has an estimated volume ranging from 500 Tcf to 1.2 million Tcf. Economic values of these gas hydrate reservoirs are tremendous. A better understanding of the properties of gas hydrate-bearing reservoirs could lead to the development of novel safe and economic production methods. There are two major deposition types of gas hydrate in porous media: pore filling and grain contact cementing-not including those related to geomechanics effect during research of hydrate- bearing reservoirs. The difference between these two distribution types is related to a nucleation condition at the beginning of hydrate cluster formation. Previous work from Verma and Pruess, 1988, shows that hydrate distribution in the pore volume and pore throat depends on the length difference for correlation between the porosity and permeability change in porous media. The previous correlation only considers the contact cementing deposition type. This thesis focuses on the study of correlation models between permeability and porosity changes during formation and dissociation of gas hydrates in porous media, called the permeability adjustment factor. A series of equations has been developed based on consideration of different parameters in the correlation, such as the power factor and critical porosity. In previous permeability adjustment factor equations, permeability is calculated with the permeability equation based on a tubes-in-series model-only considering the contact cementing deposition type. This research first derives the permeability equation for the pore filling deposition type. 
To give a more comprehensive equation considering both types of deposition, MATLAB has been used to correlate permeability and porosity. This resulted in equations that involve a combination of pore filling and contact cementing deposition types. Finally, the TOUGH+Hydrate, T+H, numerical simulator was modified to include these equations to show the different results of production and saturation results for a depressurization process.
    • Application of mobile robotics concepts to industrial automation

      Turner, Cameron J.; Chee, Matthew C. T.; Steele, John P. H.; Blacklock, Jenifer; Newman, Alexandra M. (Colorado School of Mines. Arthur Lakes Library, 2015)
      The author proposes a methodology to construct reconfigurable flexible manufacturing systems by enabling individual components to make localized decisions influenced by optimization algorithms. In order to evaluate this methodology, a simulation platform, the Networked Autonomous Automation System Simulator (NAASS), was created and validated with a physical setup, the Basic Automation and Robotics Demonstration (BARD). The NAASS was used to evaluate the performance, system robustness and system scalability of a variety of material transportation systems under different scheduling optimization paradigms (SOPs). Results showed that the SOPs with inbuilt intelligence were more robust and outperformed those that lacked local decision-making capability under both stochastic and deterministic conditions. Investigation into the possibility of system scaling yielded evidence indicating that the relationship between the scaled and unscaled systems was both complex and non-linear. Attractive avenues for future research that would yield applicable advances to industry were also identified.
    • Application of space-time structured light to controlled high-intensity laser matter interactions in point and line target geometries

      Durfee, Charles G.; Meier, Amanda K.; Squier, Jeff A.; Ruskell, Todd G., 1969-; Scales, John Alan; Pankavich, Stephen (Colorado School of Mines. Arthur Lakes Library, 2015)
      With ultrashort laser pulses, nonlinear effects can be observed with low energy in each pulse. The broad bandwidth that makes it possible to produce a short pulse also introduces new degrees of freedom for manipulating the beam. The different frequency components that make up the bandwidth can be thought of as individual Gaussian beamlets that travel at different directions through optical elements due to dispersion or spatial chirp. These distortions are usually minimized in laser alignments yet manipulation of the of the Gaussian beamlets can be useful in axial localization and pulse front tilt (PFT), which is called simultaneous spatial and temporal focusing (SSTF) and is useful in many applications. In order to use SSTF, we need to characterize the pulse in both the spatial and spectral domains. We have developed a novel Sagnac shearing interferometer which combines spatial and spectral interference. The combination of divergence and spatial shear results in a local angle between the beams which can be extracted from the interference pattern using spatially resolved spectral interferometry by Fourier analysis. A spatial inversion allows our design to be extended to characterize a coupled spatio-temporal distortion, spatial chirp. We have developed techniques to control relative PFT for focused beams. We use a single pass grating compressor as a passively stable pump-probe experiment stemming from diffractive optics, where the +/-1 diffracted orders from a transmission grating pair are focused by an off-axis parabola which crosses the pump beams to form an index grating at the focus that is probed by the zero order. The experiment can be aligned for overlap of the pulse front tilt across the entire focal spot. This PF overlap can be applied in nonlinear mixing processes, such as harmonic generation or four wave mixing, to characterize semiconductor samples or ionization dynamics. 
We have also extended SSTF to a cylindrical geometry in Bessel-Gauss and vortex beams. Our novel setup for producing radial SSTF double passes a Gaussian beam through an axicon to produce a collimated ring beam that is then focused to a Bessel zone. Including a vortex mask in the beam gives a phase singularity on axis which creates a higher order Bessel-Gauss, corresponding to the vortex mode order. Circular gratings were also designed to extend SSTF to a cylindrical geometry as well as utilize the pulse front matching technique mentioned above. The Bessel zone with vortex singularity allows for high intensity walls that ionize causing the index of refraction to be higher in the core than the cladding, therefore allowing beam guiding. We modeled the waveguide geometry to optimize mode coupling. Radial SSTF could be used to guide high intensity beams with application to guide high harmonics.
    • Application of waveform tomography at Campos Basin field, Brazil

      Davis, Thomas L. (Thomas Leonard), 1947-; Pedrassi, Mauricio; TSvankin, I. D.; Behura, Jyoti (Colorado School of Mines. Arthur Lakes Library, 2015)
      Campos Basin field has been continuously characterized inside the Reservoir Characterization Project (RCP). Past research includes poststack and prestack joint inversions of PP and PS data which increased the reservoir resolution and could predict a porosity map. To further improve characterization of the Campos Basin field, waveform tomography (WT), or full waveform inversion (FWI), is performed for a 2D line from the 2010 ocean bottom cable (OBC) data under a 2D acoustic isotropic medium assumption. The goal is to bring high resolution and accuracy to the P wave velocity model for better quality reservoir imaging. In order to achieve the best results the application of WT to the 2D dataset required defining the suitable parameters to these data, where the main options in the inversion are the type of objective function, the time domain damping, and the frequency discretization. The waveform inversion has improved the final velocity model, as verified by migrated images showing more continuous and focused horizons at the reservoir depth. The improved seismic image and velocity model are possible inputs, respectively, to a new geological interpretation and to acoustic/elastic attributes inversion. However, only the background velocity was updated and the inversion failed in enhancing the resolution of the final velocity model. Waveform inversion was also performed on synthetic dataset with larger offsets generated for a reliable velocity model of the Campos Basin field. The combination of a larger offset with a waveform inversion strategy that includes amplitude and phase residuals in the objective function proved to be efficient in increasing the resolution of the final velocity model. Synthetic modeling suggests that if a new seismic acquisition program is conducted over the field, it would be highly beneficial for velocity analysis and reservoir characterization to acquire longer offsets.
    • Artificial maturation of oil shale: the Irati Formation from the Paraná Basin, Brazil

      Prasad, Manika; Gayer, James L.; Young, Terence K.; Andrews-Hanna, Jeffrey C.; Boak, Jeremy (Colorado School of Mines. Arthur Lakes Library, 2015)
      Oil shale samples from the Irati Formation in Brazil were evaluated from an outcrop block, denoted Block 003. The goals of this thesis include: 1) Characterizing the Irati Formation, 2) Comparing the effects of two different types of pyrolysis, anhydrous and hydrous, and 3) Utilizing a variety of geophysical experiments to determine the changes associated with each type of pyrolysis. Primary work included determining total organic carbon, source rock analysis, mineralogy, computer tomography x-ray scans, and scanning electron microscope images before and after pyrolysis, as well as acoustic properties of the samples during pyrolysis. Two types of pyrolysis (hydrous and anhydrous) were performed on samples cored at three different orientations (0°, 45°, and 90°) with respect to the axis of symmetry, requiring six total experiments. During pyrolysis, the overall effective pressure was maintained at 800 psi, and the holding temperature was 365°C. The changes and deformation in the hydrous pyrolysis samples were greater compared to the anhydrous pyrolysis. The velocities gave the best indication of changes occurring during pyrolysis, but it was difficult to maintain the same amplitude and quality of waveforms at higher temperatures. The velocity changes were due to a combination of factors, including thermal deformation of the samples, fracture porosity development, and the release of adsorbed water and bitumen from the sample. Anhydrous pyrolysis in this study did not reduce TOC, while TOC was reduced due to hydrous pyrolysis by 5%, and velocities in the hydrous pyrolysis decreased by up to 30% at 365°C compared to room temperature. Data from this study and future data that can be acquired with the improved high-temperature, high-pressure experiment will assist in future economic production from oil shale at lower temperatures under hydrous pyrolysis conditions.
    • Assessing computational thinking in Computer Science Unplugged activities

      Camp, Tracy; Rader, Cyndi A. (Cyndi Ann); Rodriguez, Brandon R.; Bridgman, Terry; Painter-Wakefield, Christopher (Colorado School of Mines. Arthur Lakes Library, 2015)
      There is very little research on assessing computational thinking without using a programming language, despite the wide adoption of activities that teach these concepts without a computer, such as CS Unplugged. Measuring student achievement using CS Unplugged is further complicated by the fact that most activities are kinesthetic and team-oriented, which contrasts with traditional assessment strategies designed for lectures and individual tasks. To address these issues, we have created an assessment strategy that uses a combination of in-class assignments and a final project. The assessments are designed to test different computational thinking principles and use a variety of problem structures. The assessments were evaluated using a well-defined rubric along with notes from classroom observations to discover the extent to which CS Unplugged activities promote computational thinking. The results from our experiment include several statistically significant shifts supporting the hypothesis that students are learning computational thinking skills from CS Unplugged. Student performance across all of the worksheets gave insight into where problems can be improved or refined so that a greater number of students can reach proficiency in the subject areas.
    • Assessing productivity impairment of surfactant-polymer EOR using laboratory and field data

      Kazemi, Hossein; Manrique, E. (Eduardo); Izadi Kamouei, Mehdi; Curtis, John B.; Ozkan, E.; Yin, Xiaolong; Wu, Yu-Shu; Griffiths, D. V. (Colorado School of Mines. Arthur Lakes Library, 2015)
      Surfactant-polymer (SP) flooding is an enhanced oil recovery (EOR) technique used to mobilize residual oil by lowering the oil-water interfacial tension, promoting micellar solubilization, and lowering the displacing-phase mobility to improve sweep efficiency. Surfactant-polymer flooding, also known as micellar flooding, has been studied both in the laboratory and in field pilot tests for several decades. Surfactant-polymer flooding is believed to be a major enhanced oil recovery technique based on laboratory experiments; however, its application in the field has not met the expectations set by laboratory results. Successful field applications of SP flooding have been limited by a number of obstacles, including the large number of laboratory experiments required to design an appropriate SP system, high sensitivity to reservoir rock and fluid characteristics, the complexity of reservoirs, the infrastructure required for field implementation, and the lack of reliable statistics on the success of field applications. In other words, many variables affect reservoir performance. Traditionally, in SP flooding, a tapered polymer solution follows the injected surfactant slug. However, in recent years co-injection of surfactant and a relatively high-concentration polymer solution has been used in several field trials. Despite a significant early-time increase in oil recovery in several surfactant-polymer floods, the period of increased oil production has been short-lived, followed by a significant decline in oil production. Thus, this research primarily relied on field test data to understand the problem, with the hope that an improved solution strategy can be developed for new field applications. Second, current numerical models do not correctly predict the performance of surfactant-polymer floods and tend to overpredict. Thus, the second objective of this research was to develop a methodology for using combined field and laboratory data in commercial simulators to improve their predictive capability.
    • Assessing the effect of best management practices on water quality and flow regime in an urban watershed under climate change disturbance

      Hogue, Terri S.; Radavich, Katherine A.; McCray, John E.; Siegrist, Robert L. (Colorado School of Mines. Arthur Lakes Library, 2015)
      Urban streams and water bodies have become increasingly polluted due to stormwater runoff from increased urbanization. Improved water quality and reduced flood peaks are the ultimate goals of stormwater management to achieve safe and healthy urban water bodies, with additional benefits of increased green space and increased domestic water supply through potential recycling and groundwater recharge. In this research, Low Impact Development (LID) and Best Management Practices (BMPs) are assessed as natural methods to manage stormwater by applying the EPA System for Urban Stormwater Treatment and Analysis INtegration (SUSTAIN) model. Ballona Creek watershed in the Los Angeles basin (128 square miles with 61% impervious land cover) was chosen as a case study area to more specifically investigate the mechanisms through which different BMP types achieve compliance with water quality regulations, reduce peak flows, and encourage recharge through infiltration. This research illustrates how the characteristics of distinctive BMP types influence compliance and flow regimes. Model results show that infiltration-dominated BMPs reduced the total pollutant load at the outlet, but residual pollutants were more concentrated, resulting in worse compliance with water quality standards. However, out of 86,000 acre-feet per year (AFY) of runoff from the whole watershed during the modeled period of 2004-2008, these BMP types infiltrated 66,000 AFY of water (76% of the total) for potential reuse and groundwater recharge, and reduced peak flows of larger storm events by up to 60%. Treat-and-release-dominated BMPs resulted in lower pollutant concentrations and better compliance at the outlet, but higher pollutant loads were observed and only 34,000 AFY was infiltrated (40% of the total), with minimal peak flow reduction. Assessing future changes in precipitation and temperature due to climate variability further illustrated the beneficial and limiting characteristics of the five BMP types. Due to their poor peak flow reduction and infiltration capacity, treat-and-release BMPs would not provide as much benefit for future climate scenarios in which more intense precipitation events might occur. Stormwater modeling at the watershed scale can ultimately inform strategic BMP selection based on current and future hydrologic characteristics and desired outcomes.
    • Automated characterization of uranium-molybdenum fuel microstructures

      King, Jeffrey C.; Collette, Ryan A.; Diercks, David R.; Keiser, Dennis; Van Bossuyt, Douglas L. (Colorado School of Mines. Arthur Lakes Library, 2015)
      Interpreting the performance of nuclear fuel materials under various irradiation conditions is essential to the qualification of new nuclear fuels. Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating judgment calls that may vary from person to person or sample to sample. This thesis develops several image analysis routines designed for fission gas bubble characterization in irradiated uranium-molybdenum (U-Mo) monolithic-type plate fuels. Electron micrographs of uranium-molybdenum fuel samples prepared by Idaho National Laboratory are used as the reference images for algorithm development using CellProfiler and MATLAB's Image Processing Toolbox. The resulting algorithm cleans the input image through pre-processing and subsequently segments the fission gas bubbles from the fuel sample images. The segmented image is then used to determine the bubble count, calculate the bubble size distribution, and estimate the overall sample porosity. In addition to technique development, the project includes verification and validation of the established image processing algorithm, as well as large-scale data extraction and analysis of a stack of U-Mo sample images. This work demonstrates that it is possible to use automated image analysis to extract meaningful fission product data from micrographs of nuclear fuel. In particular, the largely qualitative, visual-inspection-based methods often used by fuel performance analysts can effectively be replaced by quantitative methods that are faster, more consistent, and at least as accurate as their manual counterparts.
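      The measurement step described above (bubble count, size distribution, and porosity from a segmented image) can be sketched as follows. This is a minimal stand-in, not the thesis's actual CellProfiler/MATLAB pipeline: it assumes the micrograph has already been thresholded to a binary array and uses a simple 4-connected component labeling; the toy `image` below is illustrative.

```python
from collections import deque

def label_bubbles(binary):
    """Label 4-connected components (bubbles) in a binary image.
    Returns (labels, sizes): a label grid and the pixel count per bubble."""
    rows, cols = len(binary), len(binary[0])
    labels = [[0] * cols for _ in range(rows)]
    sizes = []
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and labels[r][c] == 0:
                next_label += 1
                size = 0
                queue = deque([(r, c)])
                labels[r][c] = next_label
                while queue:                      # breadth-first flood fill
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                sizes.append(size)
    return labels, sizes

# Toy "segmented micrograph": 1 = fission gas bubble pixel, 0 = fuel matrix.
image = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [1, 0, 0, 0, 0],
]
labels, sizes = label_bubbles(image)
bubble_count = len(sizes)                              # distinct bubbles
porosity = sum(sizes) / (len(image) * len(image[0]))   # areal porosity
print(bubble_count, sorted(sizes), porosity)
```

      From the labeled image, the bubble size distribution is just the histogram of `sizes`, and porosity is the bubble-pixel fraction of the image area.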
    • Automatic and simultaneous correlation of multiple well logs

      Hale, Dave, 1955-; Wheeler, Loralee F.; Davis, Thomas L. (Thomas Leonard), 1947-; Li, Yaoguo (Colorado School of Mines. Arthur Lakes Library, 2015)
      Well log correlation is an important step in geophysical interpretation, but as the number of wells increases, so does the complexity of the correlation process. I propose a new method for automatic and simultaneous well log correlation that provides an optimal alignment of all logs and, in addition, is relatively insensitive to the large measurement errors common in well logs. First, for any number of well logs, I use a new variant of the dynamic warping method, requiring no prior geologic information, to find for each pair of logs a set of corresponding depths. Depths in one log may have one or more corresponding depths in another log, and many such pairs of corresponding depths can be found for any pair of well logs. Requiring consistency among all such pairwise correlations gives rise to an overdetermined system of equations, with unknown relative geologic time (RGT) shifts to be computed for each log sample. A least-squares solution of these equations, found using the conjugate gradient method, yields for every well log a sequence of RGT shifts that optimally align all well logs. This solution is not unique because aligned logs remain aligned if they are shifted, stretched, or squeezed vertically. Therefore, I constrain these shifts, for each RGT, to have zero mean over all logs.
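      The least-squares step described above can be sketched in simplified form. The sketch below assumes one constant shift per log rather than a per-sample RGT shift, and solves the overdetermined system of pairwise shift observations with a dense least-squares solve instead of conjugate gradients; the pairs and shift values are illustrative, not from the thesis.

```python
import numpy as np

def solve_shifts(n_logs, pairs, s):
    """Estimate one shift t_i per log from pairwise observations
    s_k ~ t_i - t_j for (i, j) in pairs, with a zero-mean constraint
    appended to remove the bulk-shift null space (aligned logs stay
    aligned if all shifts move together)."""
    A = np.zeros((len(pairs) + 1, n_logs))
    b = np.zeros(len(pairs) + 1)
    for k, (i, j) in enumerate(pairs):
        A[k, i], A[k, j] = 1.0, -1.0   # one row per pairwise correlation
        b[k] = s[k]
    A[-1, :] = 1.0                     # zero-mean constraint: sum(t) = 0
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t

# Three logs with consistent pairwise shifts: the zero-mean solution
# is t = (2, -1, -1).
pairs = [(0, 1), (1, 2), (0, 2)]
s = [3.0, 0.0, 3.0]
t = solve_shifts(3, pairs, s)
print(np.round(t, 6))
```

      With noisy or mutually inconsistent pairwise shifts, the same solve returns the shifts that best reconcile all pairs in the least-squares sense, which is what makes the approach robust to occasional bad correlations.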
    • Autothermal reforming of methane for syngas production in a novel ceramic microchannel reactor

      Sullivan, Neal P.; Blakeley, Brandon; Bogin, Gregory E.; Kee, R. J.; Karakaya, Canan (Colorado School of Mines. Arthur Lakes Library, 2015)
      Ceramic microchannel reactors offer significant advantages over current microreactor technology due to their ability to operate at high temperatures and in harsh chemical environments while using relatively inexpensive materials and manufacturing processes. Previous research on a novel ceramic microchannel reactor demonstrated a maximum heat exchanger effectiveness of 88% under inert flow conditions and 100% methane conversion at a gas hourly space velocity (GHSV) of 15,000 1/hr under steam methane reforming (SMR) conditions. This work focuses on widening the reactor operating conditions by exploring autothermal reforming and catalytic partial oxidation (CPOX) of methane. The effects of both endothermic SMR and exothermic CPOX on product composition and reactor temperatures are explored. Furthermore, reactive-side flow rates are increased beyond previous tests, to a maximum GHSV of 75,000 1/hr. Additionally, a computational fluid dynamics (CFD) model is implemented in ANSYS Fluent, which combines fluid flow, heat transfer, and a 48-step heterogeneous chemical mechanism into a three-dimensional model. Complementing reactor testing with the CFD model not only allows accurate prediction of reactor performance but also gives insight into the effects of exothermic and endothermic chemistry inside the reactor. Experimental results indicate that the presence of O2 in the fuel stream dominates reactor exhaust temperatures, although varying concentrations of O2 and H2O are shown to influence product composition. The microchannel reactor showed no signs of thermal stress, despite operating at stoichiometric CPOX conditions. Autothermal reforming demonstrated promising results, achieving 90% methane conversion at a GHSV of 75,000 1/hr for an S/C ratio of 2.5 and an O/C ratio of 1.