Colorado School of Mines: Recent submissions
-
Light-trapping structures fabricated in situ for ultrathin III-V solar cells
The growth of photovoltaic technology depends on high efficiency and cost reductions. III-V photovoltaics have demonstrated the highest conversion efficiencies, but are limited by their high cost. To reduce the high cost of epitaxy, the absorbing layer can be thinned to form so-called “ultrathin” solar cells. Ultrathin cells have low optical absorption and therefore a lower short circuit current density (JSC). Light-trapping methods can increase absorption and JSC, but nearly all existing methods of fabricating light-trapping structures for III-V materials require ex situ processing that can be time, labor, and cost intensive. In contrast, fully in situ methods of fabricating light-trapping structures support higher industrial throughput and cost reduction by utilizing existing capital equipment, i.e., a growth reactor, to generate the light-trapping structures after device growth without removal from the reactor. This thesis discusses the development of a fully in situ method of fabricating light-scattering structures for III-V materials that utilizes the phenomenon of redeposition during vapor phase HCl etching to generate a rough, textured surface morphology. It is shown that a rough morphology only forms by inducing the hydride-enhanced hydride vapor phase epitaxy growth mode. Compositional analysis identifies the redeposited morphology as highly Ga-rich GaInP, and it is shown that the redeposition is crystalline and has epitaxial registry to the substrate. The kinetic and thermodynamic mechanisms causing redeposition of Ga-rich material are explained. GaAs solar cells with a 270-nm absorber are textured in situ following device growth, and the textured devices are compared to planar devices with the same cell architecture. The JSC is 5.1% higher on average in the textured devices, yielding a maximum of 21.84 mA/cm2 with no loss in open circuit voltage or fill factor. This performance enhancement is achieved with only a 60 s treatment, demonstrating the high-throughput viability of this texturing method.
-
Machine learning for network traffic classification under labeled data and training time constraints
This thesis investigates using machine learning (including deep learning) for network traffic classification when constrained by too little labeled data or insufficient time to train models from scratch. Network traffic classification is essential in network security, network management, and application identification. Labeling network traffic data, however, is often time-consuming and expensive, which limits the amount of labeled data available for training machine learning models. This thesis investigates a semi-supervised learning approach that leverages positively labeled and unlabeled data to improve classification performance when faced with a lack of labeled data. The method uses a combination of bootstrap aggregation and tree-based classifiers to successfully classify unlabeled network traffic flows belonging to the same class. This same semi-supervised learning approach also successfully detects zero-day (i.e., never before seen) encrypted messaging applications for which no training data is available. Additionally, this thesis investigates using deep transfer learning from a state-of-the-art computer vision model for network traffic image classification. By representing network traffic flows as grayscale network traffic images, highly sophisticated image classification models can transfer to the task of network traffic classification. Using these advanced models as a source for training dramatically reduces the time needed to train new models, addressing the constraint of having too little time for training. To investigate whether deep transfer learning is successful in network traffic image classification, this work used our network flow capture system (which creates a volume of unlabeled data) and commercial appliances (to turn the unlabeled dataset into a real-world labeled dataset). Experimental results in this thesis demonstrate that the semi-supervised learning technique of positive and unlabeled learning is highly effective at detecting hidden positives amongst unlabeled data. Furthermore, this thesis shows that representing network traffic flows as grayscale images allows state-of-the-art image classification models (e.g., ResNet) to transfer effectively to the domain of network traffic classification.
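As a rough illustration of the positive-and-unlabeled (bootstrap aggregation plus tree-based classifier) idea described above, the sketch below scores unlabeled flows by repeatedly treating a bootstrap of unlabeled examples as provisional negatives. The feature representation, tree depth, number of rounds, and synthetic data are illustrative assumptions, not the thesis's actual configuration.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def pu_bagging_scores(X_pos, X_unlabeled, n_rounds=100, seed=0):
        """Average out-of-bag scores for unlabeled flows; higher = more likely positive."""
        rng = np.random.default_rng(seed)
        n_u = len(X_unlabeled)
        score_sum = np.zeros(n_u)
        oob_count = np.zeros(n_u)
        for _ in range(n_rounds):
            # Draw a bootstrap of unlabeled flows to act as provisional negatives.
            boot = rng.choice(n_u, size=len(X_pos), replace=True)
            X_train = np.vstack([X_pos, X_unlabeled[boot]])
            y_train = np.concatenate([np.ones(len(X_pos)), np.zeros(len(boot))])
            clf = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_train, y_train)
            # Score only flows left out of this bootstrap (out-of-bag).
            oob = np.setdiff1d(np.arange(n_u), boot)
            score_sum[oob] += clf.predict_proba(X_unlabeled[oob])[:, 1]
            oob_count[oob] += 1
        return score_sum / np.maximum(oob_count, 1)

    # Tiny synthetic demo (hypothetical flow features).
    X_pos = np.random.default_rng(1).normal(1.0, 1.0, size=(50, 4))
    X_unl = np.random.default_rng(2).normal(0.0, 1.0, size=(200, 4))
    print(pu_bagging_scores(X_pos, X_unl)[:5])

Unlabeled flows with consistently high out-of-bag scores are the kind of "hidden positives" this approach is designed to surface.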
-
Optimal design and dispatch of hybrid co-generation microgrid systems with resilience considerations
The decision to supplement conventional energy generation with an on-site microgrid consisting of a mix of distributed generation resources and energy storage devices is motivated by several factors, including reducing costs, increasing sustainability, and improving the resilience and reliability of an energy system. The procurement and operational costs of installing distributed generation are traditionally the impetus behind long-term decisions. While costs remain a key component, decision-makers are now interested in other benefits such as improved resilience and reliability, reductions in emissions, and public sentiment. REopt, an open-source web tool built on a mixed-integer linear program, provides users the ability to conduct parametric analysis under many scenarios. Commercial optimization software, however, often struggles to obtain fast and reliable solutions. We therefore develop a matheuristic that yields objective function values within 5% of an exogenously produced optimum in fewer than 30 seconds for 90% of test cases, compared to only 10% for a traditional optimization solver. We then extend REopt to explore the tradeoffs between cost and resilience for a coastal wastewater treatment facility. We find that the facility can reduce life-cycle energy costs by 3.1% through the installation of a hybrid combined-heat-and-power, photovoltaic, and storage system. Furthermore, when paired with existing diesel generators, this system can sustain full load for seven days while saving $664,000 over 25 years and reducing diesel fuel use by 48% compared to the diesel-only solution. Finally, we extend the concepts of the first two works by incorporating emerging technologies (fuel cells) into a distributed generation multi-objective model that minimizes costs while ensuring a community's critical load is satisfied during a utility service disruption. This extension requires the incorporation of challenging non-linear constraints that prevent state-of-the-art optimization software from finding solutions within 15% of optimality, on average, after two hours for realistic instances encompassing five technologies and a year-long time horizon at hourly fidelity. We devise a multi-stage methodology resulting in, on average, an 8% decrease in objective function value. Additionally, solutions obtained using our methodology utilize fuel cells three times more often than solutions obtained with commercial solvers.
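For readers unfamiliar with the underlying model class, the toy sketch below shows a design-and-dispatch formulation in the same spirit (here it reduces to a pure linear program): size a PV system and dispatch grid purchases to meet load at minimum cost. All numbers, variable names, and the use of PuLP are illustrative assumptions and are not taken from REopt or the thesis.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

    load = [40.0, 55.0, 70.0, 60.0]        # site load per period (kW), illustrative
    pv_cf = [0.0, 0.3, 0.6, 0.2]           # PV capacity factor per period, illustrative
    pv_capex, grid_price = 2.0, 0.12       # toy annualized $/kW and $/kWh

    prob = LpProblem("design_and_dispatch", LpMinimize)
    pv_kw = LpVariable("pv_kw", lowBound=0)                                   # sizing decision
    grid = [LpVariable(f"grid_{t}", lowBound=0) for t in range(len(load))]    # dispatch decisions

    prob += pv_capex * pv_kw + grid_price * lpSum(grid)      # capital plus energy cost
    for t, d in enumerate(load):
        prob += pv_cf[t] * pv_kw + grid[t] >= d              # meet load in every period

    prob.solve()
    print(value(pv_kw), [value(g) for g in grid])

Real formulations such as REopt add integer decisions, storage state-of-charge coupling, tariffs, and resilience constraints, which is what motivates the matheuristic described above.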
-
Seismic amplitude fidelity study for quantitative analysis and its relation to stack-based signal-to-noise ratio estimation using the SEAM Arid model, with application to field data from the Powder River Basin
Through my research I develop and implement quantitative tools to assess seismic amplitude fidelity for quantitative interpretation across several land acquisition designs, and establish a signal-to-noise ratio (SNR) threshold for adequate quantitative analysis in complex near-surface environments. The first part of the research uses data from the synthetic SEAM Arid model, which simulates typical desert-environment near-surface features. I use data from the model to develop the assessment tools needed to evaluate amplitude fidelity for quantitative interpretation. I investigate the impact of near-surface features and acquisition geometries on amplitude information and how these factors relate to interpretation of structural and amplitude-dependent features. Additionally, I explore whether the stack-based SNR estimations associated with different seismic stacks can indicate the suitability of seismic volumes for quantitative interpretation. The first step of the project was to equalize the amplitude range of the different seismic volumes to facilitate a one-to-one comparison. I then performed conventional seismic interpretation and attribute analysis to identify and map the structural and amplitude-dependent features, mainly targeting two shallow channel systems and two deep shale geobody accumulations. I also conducted quantitative amplitude analysis by calculating the standard deviation of the amplitudes relative to the reference seismic volume with the best overall SNR. This allowed me to evaluate the preservation of amplitude information and identify potential areas where the signal may be compromised. Furthermore, I created a quality factor metric for each seismic volume that assessed how well various seismic attributes can map the four subsurface targets compared to the reference seismic volume. Finally, I correlated the quantitative metrics, i.e., the quality factors and standard deviation of amplitudes, with the SNR volumes to determine whether the latter are indicative of seismic data suitability for robust quantitative seismic interpretation. By combining these methods and procedures, I was able to comprehensively evaluate the seismic data and provide insights into the effects of near-surface features and acquisition geometries on quantitative seismic interpretation. As a result, I provided quantitative metrics for optimizing seismic survey designs tailored to interpreter goals by setting a target SNR value that is sufficient for interpretation, and an acquisition geometry that is suitable for complex near-surface environments. I also determined that effective mapping of structural and amplitude-dependent features requires a minimum SNR of 6 dB, as SNR values below this threshold tend to obscure amplitude information. Moreover, I determined that receiver arrays are superior to single-sensor receivers in producing high-quality seismic images and handling near-surface noise and scattering. I also found that areas with low standard deviation of amplitudes relative to the reference volume are associated with successful mapping of subsurface features and high SNR values.
I also observed that the quality factor metric suggests the significance of dense acquisition designs for accurate mapping of amplitude-dependent features in shallow targets, whereas all tested acquisition geometries were proficient in mapping deeper targets due to high reflectivity. Finally, I observed a correlation between the stack-based SNR estimations and the quantitative metrics, demonstrating that stack-based SNR provides valuable insights into the suitability of seismic data for robust quantitative seismic interpretation. In the second part of the research, I implemented some of the developed assessment methods on Non-Uniform Optimal Sampling (NUOS) field data from the Powder River Basin. I tested three different seismic angle stacks of the same study area. I first estimated a cross-correlation-based SNR over four well locations and two depth levels. I then calculated the amplitude standard deviation of the different seismic stacks in relation to the synthetic seismograms at the four well locations. Finally, I evaluated the performance of the different seismic stacks in predicting the reservoirs’ sand presence and their porosities using the results from the P-impedance inversion volumes. As a result, I determined that the NUOS method has likely not compromised the amplitude fidelity for characterizing the main reservoirs, and that the high noise level in the eastern part of the survey was mainly due to active drilling and completion operations, not due to insufficient seismic sampling.
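A minimal sketch of the kind of quantitative metrics discussed above is given below: a stack-based SNR in decibels and a standard deviation of amplitudes relative to a reference volume. The exact estimators used in this research (for example, how signal and noise are separated or how volumes are scaled) are not specified here, so these functions are illustrative assumptions only; a threshold such as the 6 dB value reported above would be applied to the snr_db output.

    import numpy as np

    def snr_db(signal_estimate, residual_noise):
        """Stack-based SNR in dB from estimated signal and noise samples."""
        p_sig = np.mean(np.square(signal_estimate))
        p_noise = np.mean(np.square(residual_noise))
        return 10.0 * np.log10(p_sig / p_noise)

    def amplitude_std_vs_reference(volume, reference):
        """Standard deviation of amplitude differences relative to a reference
        volume, after scaling the test volume to the reference's RMS amplitude."""
        scale = np.sqrt(np.mean(reference ** 2) / np.mean(volume ** 2))
        return np.std(scale * volume - reference)

    # Toy example with synthetic traces (illustrative only).
    rng = np.random.default_rng(0)
    ref = np.sin(np.linspace(0, 20, 500))
    test = 0.9 * ref + 0.1 * rng.standard_normal(500)
    print(snr_db(ref, test - ref), amplitude_std_vs_reference(test, ref))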
-
Motion planning with task scheduling in heterogeneous computing systems
Motion planning is an important problem in many contexts of robotics. Heterogeneous computing systems in robots are able to run tasks on different processing units in varying orders, but with different impacts on the robot's state and performance. Existing sampling-based motion planning frameworks explore a state space, typically through random sampling, to create a path to a goal region, but they consider only physical obstacles such as walls and do not account for the constraints of computational requirements on the path or the impacts of choosing different schedules for computation. We introduce a novel system that uses Petri nets to model the computational requirements and uses constraint solvers to find computation schedules for the motion planning tasks. This allows us to select motions based not only on their physical validity, but also on computation-related parameters. We subdivide a space of constraints on the system into regions, enabling schedule reuse in order to improve the algorithm's efficiency. We also discuss the use of Petri nets to model another aspect of computation in a heterogeneous environment, memory contention. Our system enables us to consider physical dynamics such as heat and power in a way that prior systems cannot. We demonstrate that our system can handle a variety of constraints of different severities, and can avoid computational obstacles more effectively than naïve planning systems which do not consider computational constraints and instead disallow regions as though they are physical obstacles.
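The sketch below shows the basic Petri-net mechanics such a system builds on: a marking of tokens over places, and transitions that fire only when their input places hold enough tokens. The place and transition names (GPU/CPU slots, a perception task) are hypothetical examples, not the thesis's actual model.

    # Minimal Petri-net sketch: places hold tokens (e.g., free compute units),
    # transitions consume and produce tokens when a planning task is scheduled.
    class PetriNet:
        def __init__(self, marking):
            self.marking = dict(marking)              # place -> token count

        def enabled(self, transition):
            pre, _ = transition
            return all(self.marking.get(p, 0) >= n for p, n in pre.items())

        def fire(self, transition):
            pre, post = transition
            if not self.enabled(transition):
                raise ValueError("transition not enabled")
            for p, n in pre.items():
                self.marking[p] -= n
            for p, n in post.items():
                self.marking[p] = self.marking.get(p, 0) + n

    # Example: running a perception task on the GPU consumes one free GPU slot
    # and produces a "result ready" token; names are purely illustrative.
    net = PetriNet({"gpu_free": 1, "cpu_free": 2})
    run_on_gpu = ({"gpu_free": 1}, {"result_ready": 1})
    net.fire(run_on_gpu)
    print(net.marking)

A constraint solver would then search over which transitions to fire, and in what order, subject to the resource tokens available on each processing unit.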
-
Exploring multi-wavelength single-shot Fourier ptychography (SSFP) as a technique for ultrafast pulsed laser (UPL) characterization
Ultrafast pulsed laser (UPL) applications are reaching beyond the expert realm of research and gaining relevance in industrial manufacturing for high-precision material processing. This transition makes it all the more important for simple and affordable UPL characterization methods to be developed. My work sets the stage for the implementation of a new, low-cost method for phase and amplitude retrieval in broadband spatio-temporal signals. Single-shot Fourier ptychography uses Fourier properties and analysis to translate between the intensity profile and spectra of a series of uniquely filtered copies of an input field. The sophisticated computational software is paired with a modified, simple 4f imaging system to collect all of the necessary data in a single "shot." A pulse can be represented by a superposition of electromagnetic waves, truncated in time. By the Heisenberg uncertainty principle, this truncation broadens the spectrum into a continuum of wavelengths within a bandwidth, which is often represented as a distribution of multiple discrete wavelengths in a histogram. By successfully retrieving both the phase and amplitude of two discrete wavelengths using SSFP, I present my work as a proof of concept toward an exciting new way for non-experts to characterize UPLs. In this thesis I introduce readers to computational imaging via Fourier ptychography and provide the details regarding system design, algorithm functionality, laboratory implementation, single- and multi-wavelength reconstruction results, and suggested next steps. With the groundwork laid, I hope to inspire others to demonstrate successful UPL reconstructions with SSFP.
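To make the "uniquely filtered copies" idea concrete, the sketch below simulates one such copy with NumPy: the input field is taken to the Fourier (pupil) plane of a 4f system, a circular sub-aperture is applied, and the intensity of the refocused field is recorded. The grid size, aperture position, and test field are illustrative assumptions, not the actual SSFP system parameters.

    import numpy as np

    def filtered_copy_intensity(field, center, radius):
        """Intensity of one filtered copy of an input field: FFT to the Fourier
        (pupil) plane of a 4f system, apply a circular sub-aperture, FFT back."""
        F = np.fft.fftshift(np.fft.fft2(field))
        ny, nx = field.shape
        y, x = np.mgrid[0:ny, 0:nx]
        mask = (x - center[0]) ** 2 + (y - center[1]) ** 2 <= radius ** 2
        out = np.fft.ifft2(np.fft.ifftshift(F * mask))
        return np.abs(out) ** 2

    # Toy input: a unit-amplitude field with a quadratic phase on a 128x128 grid.
    yy, xx = np.mgrid[0:128, 0:128]
    field = np.exp(1j * 2e-3 * ((xx - 64) ** 2 + (yy - 64) ** 2))
    I = filtered_copy_intensity(field, center=(80, 64), radius=10)

The reconstruction algorithm then uses many such intensity measurements, each from a different sub-aperture, to recover the complex field.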
-
4D simultaneous PP-PS pre-stack inversion: the Edvard Grieg field, Norwegian North Sea
The Edvard Grieg field was discovered in 2007 in the Norwegian North Sea and is in an active development phase as operator Aker BP and partners, OMV and Wintershall DEA, monitor the production and injection. Production began in November 2015, and water injection began in July 2016. Three OBC 4C seismic surveys were acquired in 2016, 2018, and 2020. The oil-bearing reservoir is composed of aeolian sands, fluvial sands, alluvial conglomerates, and shallow marine sands and is bounded by unconformities, as most of the deposition occurred subaerially. The reservoir is capped by a regionally extensive chalk unit. This thesis conducts conventional PP pre-stack inversion and joint PP-PS pre-stack inversion on time-lapse seismic data to improve the S-impedance. Introducing the converted wave improves both P- and S-impedances in our final results, as joint inversion shows that the S-impedance is not sensitive to saturation changes. We introduce a new workflow that converts impedances to reservoir dynamic properties. We extracted the maximum values from 4D effects to calibrate to pressure and saturation changes. Calibrating the results of inversion to reservoir dynamic properties can lead to more detailed conversations among asset teams as they determine reservoir compartmentalization, monitor injected fluids, identify un-swept areas of the field, and alter depletion plans. From the calibration process, four 4D changes were identified: saturation increase, saturation decrease, pressure increase, and pressure decrease. Using five specified cases, the observed 4D effects were classified as one of the 4D changes. Between 2016 and 2020, the water saturation of the field (saturation increase) has increased on average from a maximum of 38% to 45%. The pressure decrease responds to production and grows in relation to development over the years. With the addition of a new seismic vintage since this project was last analyzed, the geologic interpretations were updated. Previously, the interpretation identified a baffle blocking water flow between two injectors. This research found contradictory evidence, as the 2020 survey shows water breakthrough, indicating this area is a zone of lower permeability than the surrounding areas.
-
Effects of extended surfaces in narrow channel fluidized bed heat exchangers
Concentrating solar power (CSP) coupled to high-temperature thermal energy storage (TES) and efficient power cycles offers a promising solution for dispatchable electricity from the sun. Current state-of-the-art CSP plants that use molten nitrate salts for TES do not allow energy storage at high enough temperatures to drive high-efficiency supercritical CO2 or ultra-critical steam cycles. On the other hand, oxide particles provide a TES medium that can store energy at very high temperatures adequate for driving those cycles, but primary particle heat exchangers for coupling TES to those power cycles require expensive alloys and can be costly due to poor particle-wall heat transfer rates. Particle heat exchangers can benefit from fluidization of particle beds in narrow channels, but even with fluidization, the particle-wall heat transfer remains the limiting thermal resistance to overall heat transfer to the power cycle fluid. This study explores how the inclusion of fins in the narrow-channel particle beds can further improve particle-wall heat transfer coefficients while reducing the axial dispersion that otherwise suppresses the log mean temperature difference across the heat exchanger. The improvements in effective particle-wall heat transfer coefficients and the reduction in axial dispersion offer the potential to reduce the size and cost of the particle heat exchanger and the overall TES subsystem. This study assesses under what fluidized bed conditions fins can be most effective at enhancing particle-wall heat transfer in narrow-channel fluidized beds.
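Since the argument turns on the log mean temperature difference (LMTD), a small worked example may help: for a fixed duty Q, the required heat-transfer area scales as A = Q / (U * LMTD), so both a higher particle-wall coefficient U and a larger LMTD (less axial dispersion) shrink the exchanger. The numbers below are illustrative only and are not taken from the study.

    import numpy as np

    def lmtd(dt_hot_end, dt_cold_end):
        """Log mean temperature difference across a heat exchanger."""
        if np.isclose(dt_hot_end, dt_cold_end):
            return dt_hot_end
        return (dt_hot_end - dt_cold_end) / np.log(dt_hot_end / dt_cold_end)

    # Illustrative numbers: duty, effective particle-wall coefficient, end temperature differences.
    Q = 50e3                      # W
    U = 400.0                     # W/m^2-K
    A = Q / (U * lmtd(80.0, 40.0))
    print(round(A, 2), "m^2")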
-
Potential PFAS products from thermal decomposition of munitions: a DFT analysis of pyrolysis products of fluoropolymers in the presence of energetic compounds
Per- and polyfluoroalkyl substances (PFAS) are among the most recalcitrant anthropogenic chemicals; they can bioaccumulate and have carcinogenic properties. Their ubiquity in environmental matrices and resistance toward traditional disposal techniques pose a serious threat to the environment and public health. This thesis serves as a preliminary analysis of a novel source of PFAS: the disposal of munitions. Most munitions contain energetic compounds, to serve as the source of energy in detonation events, and fluoropolymers, to bind formulations as a means of stabilization, to seal the formulation away from the elements, or to serve as a fluorine source in pyrolants. Energetic compounds degrade through a variety of pathways, but the major pathways involve the formation of radicals and extreme heat. Despite the extreme recalcitrance of fluoropolymers, the conditions of detonation events are more than sufficient to initiate thermal decomposition in even the most robust fluoropolymers. The thermal decomposition of fluoropolymers likewise involves highly reactive radical intermediates. Since both energetic compounds and fluoropolymers undergo radical degradation mechanisms, PFAS radical recombination products are expected to form. Little is known about such potential PFAS products, since they contain functionalities not seen in other sources of PFAS, so a summary of the synthesis and degradation reactions that these species are known to undergo from synthesis work is provided. This work also contains a detailed computational investigation of chemical bonding in such potential PFAS products from the pyrolysis of fluoropolymers in the presence of energetic compounds. Such investigations provide insight into the mechanisms of PFAS production and the types of functional groups to expect, which provides guidance for future analytical work by narrowing the scope of the system. The more that computational work is able to mimic experimental results, the more efficient future laboratory work will be. Thus, this thesis concludes by proposing possible future directions for this work, which include a wide variety of ways to expand the system to be a more comprehensive representation of reality.
-
Interfacial engineering of dense metallic membranes for stable and cost-effective hydrogen purification in fusion relevant environments
Hydrogen is the most abundant element in the universe and plays a vital role in many natural and man-made processes. In particular, high purity hydrogen is of great importance for applications such as ammonia feedstock, semiconductor processing, and polymer electrolyte membrane fuel cells. Hydrogen isotopes play a critical role in fusion environments, where they serve as a fuel source for nuclear fusion reactions that release massive amounts of energy. This reaction occurs at the center of our sun, where hydrogen atoms are brought together under extreme pressure and temperature to form helium, releasing vast amounts of energy in the process. This process can be accomplished on earth by fusing isotopes of hydrogen: deuterium and tritium. A critical challenge facing fusion technology is the safe and effective management of tritium. Tritium is a radioactive beta-emitter with a short half-life (12.3 years), so it must be contained and regenerated on site. Dense metallic membranes are the preferred method of hydrogen purification in fusion environments as they are perfectly selective for hydrogen. Vanadium is a material of critical importance because it has a high hydrogen permeability and low induced radioactivity under the neutron irradiation occurring in fusion environments. Palladium-based membranes are currently the gold standard for metallic membrane separation; however, vanadium offers the benefit of being significantly cheaper than palladium. In this work, composite V-based membranes are engineered to understand their long-term stability as hydrogen permeable membranes. First, hydrogen permeability in V was studied. It was found that, in order to achieve permeation, membranes needed extensive Ar sputter treatments to efficiently clean the surface of native oxides and had to be operated at high temperature; however, these membranes ultimately fail due to the inevitable formation of oxides on the surface originating from impurities in the V or gas streams. To achieve permeation at lower temperatures, thin Pd coatings (100 nm) are required to achieve maximum permeability. However, these membranes fail due to intermetallic diffusion between Pd and V, and such alloys cannot permeate hydrogen effectively. The extent of Pd diffusion in V was quantified and correlated to membrane performance. Next, ultrathin (20-100 nm) materials were developed to be used as intermetallic diffusion barriers. The goal of this work was to enable high-throughput membranes by using Pd as a catalyst for hydrogen, a ceramic barrier to prevent intermetallic diffusion, and bulk V for its high permeability. Al2O3 was the first candidate studied due to its ease of fabrication by atomic layer deposition. Ultimately, Al2O3 was not found to be effective because VOx forms at the interface and obstructs hydrogen permeance. For this reason, nitrides were considered. First, Mo2N was studied as a potential catalyst and an intermetallic diffusion barrier. As a catalyst, Mo2N decays to Mo, leading to permeability decline because Mo is fully miscible in V. As an intermetallic diffusion barrier, Mo2N would delaminate following heat treatments, causing poor membrane performance. Finally, ZrN was developed as an intermetallic diffusion barrier. It was found that ZrN was effective at preventing intermetallic diffusion between Pd and V up to a certain temperature.
These composite membranes were found to be stable at temperatures up to 450°C, but increased temperatures lead to barrier breakdown and permeation decline. The effect of reactive sputtering conditions was studied and correlated with membrane performance, with superior permeability observed for membranes fabricated with low electrical resistivity, suggesting that N vacancies and defects may play important roles. Additionally, these membranes improved with time. Stable performance (>200 h) at 425°C reached a permeability of 6 x 10^-8 mol H2 m m^-2 s^-1 Pa^-0.5, a result 4X greater than that of Pd. Zr, having a high affinity for oxygen, was shown to effectively getter oxygen from the bulk V without impeding hydrogen permeability, which could explain the improved performance with time.
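To put the reported permeability in context, a minimal Sieverts'-law flux estimate is sketched below. Only the permeability value is taken from the abstract; the membrane thickness and operating pressures are illustrative assumptions.

    import math

    # Sieverts'-law estimate of hydrogen flux through a dense metallic membrane:
    # J = (permeability / thickness) * (sqrt(P_feed) - sqrt(P_permeate)).
    permeability = 6e-8             # mol H2 m m^-2 s^-1 Pa^-0.5 (value reported above)
    thickness = 100e-6              # m; illustrative foil thickness, not from the thesis
    p_feed, p_perm = 1.0e5, 1.0e3   # Pa; illustrative operating pressures

    flux = permeability / thickness * (math.sqrt(p_feed) - math.sqrt(p_perm))
    print(flux, "mol H2 m^-2 s^-1")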
-
Submarine landslide processes, mechanics, and effects investigated through physical experiments, numerical models, and natural samples
Remobilization of sea floor sediments (submarine slope failures) presents a hazard to coastlines and coastal communities through its propensity to damage seafloor infrastructure and generate tsunamis. Submarine slope failures and their deposits (mass transport deposits) have been identified on active and passive continental margins, even on slopes < 2°. Despite the global presence and threat of slope failures, understanding of submarine slope failure mechanisms, including factors controlling initiation, evolution, and tsunamigenesis, is limited. Here, I use flume experiments, geotechnical data, and numerical models to investigate the mechanisms of submarine slope failure initiation and behavior. Benchtop flume experiments were conducted to improve understanding of slope response to overpressure using various combinations of quartz sand, cohesive clay (smectite), and non-cohesive quartz powder (clay-sized particles). Numerical models that are commonly used to evaluate natural slope failure (infinite slope factor of safety analyses) were tested against our controlled system. Comparison between experiment results and factor of safety predictions reveals discrepancies between the models and findings, indicating the models may over-predict the stability of over-pressured slopes. Through these experiments, sediment cohesion was shown to dictate slope failure behavior, with the brittleness of slope failure directly related to higher cohesion. Sediment permeability controlled the magnitude of overpressure required to induce slope failure when sediments were 25% clay. At higher clay concentrations, permeability did not affect the overpressure required to induce failure. Additionally, when clay contents were ≥ 25%, repeat failure events were observed in experiments and separate, intact sediment blocks were rafted from parent slopes. Overpressure was found incapable of producing tsunamigenic landslides but could precondition slopes for future tsunamigenic failure. Increased slope failure hazard was identified for clay-rich (≥ 25% wt.) slopes because of the potential for rafted block development and a potential for repeat failures in locations where failures had previously occurred. This work shows additional investigation of models used for hazard assessment of slope failure and tsunamigenesis is needed to assess differences between prediction and reality. This work also shows that local geology (clay content, surface and subsurface deformation) and hydrology (overpressure) need to be considered in hazard assessments to improve the accuracy of slope failure forecasts and preparations. Extending these findings to the natural environment, the geomechanical properties (shear strength, permeability, consolidation state, and overpressure) of submarine slope failure deposits and surrounding background sediments were characterized for a N-S transect of the Japan Trench. Eleven samples from 10 sites were collected by International Ocean Discovery Program Expedition 386: Japan Trench Paleoseismology using giant piston cores. Undrained direct simple shear and constant rate of strain experiments were performed on these samples.
Spatial, lithological, and seismic history trends in sediment shear strengths, permeabilities, consolidation states, and overpressures were investigated. No trends were found spatially, lithologically, or with seismic history, suggesting that local heterogeneity may be important for failure and that seismic strengthening may not be significant. Feedbacks between peak shear strength, overpressure, shear-weakening, and overconsolidation were identified, indicating that once sediment shearing/remobilization begins, continued sediment shearing should require progressively less shear strength. To improve regional forecasting and preparation efforts for future tsunami landfall, including identification of areas at risk of slope failure in the Japan Trench, a more robust understanding of Japan Trench sediment stability is required. Finally, physical experiments and natural observations were connected to coastal communities through their implications for risk assessments. The importance of local geology and hydrology in assessing failure likelihood and tsunamigenic potential was communicated in language common to the risk assessment community. Specifically, permeability, overpressure, cohesion, and regional seismic history in slopes were characterized in relation to coastal community risk assessment, preparations, and response. Overpressures were identified as capable of lowering slope stability, triggering non-tsunamigenic slope failures, and preconditioning a slope for tsunamigenic failure. Clay concentrations were identified as determining failure behavior, including tsunamigenic potential and the potential for repeated localized failures. Recommendations for modification of current assessment tools, including numerical models, were made that incorporate these parameters. The importance of these parameters and their ramifications on margins not traditionally concerned with tsunamis was reinforced. This work demonstrated the importance of understanding how a slope might be preconditioned for failure and how inclusion of local geology and hydrology is necessary for a more holistic risk assessment of slope failure and tsunamigenesis.
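For reference, one common textbook form of the infinite-slope factor-of-safety analysis mentioned above is sketched below, with excess pore pressure reducing the effective normal stress. The parameter values are illustrative and the exact formulation used in the thesis may differ.

    import math

    def infinite_slope_fs(c_prime, phi_deg, gamma_buoyant, depth, slope_deg, excess_pressure):
        """Infinite-slope factor of safety for a submerged slope with excess pore
        pressure; FS < 1 predicts failure. A common textbook form only."""
        beta = math.radians(slope_deg)
        phi = math.radians(phi_deg)
        normal_eff = gamma_buoyant * depth * math.cos(beta) ** 2 - excess_pressure
        resisting = c_prime + normal_eff * math.tan(phi)
        driving = gamma_buoyant * depth * math.sin(beta) * math.cos(beta)
        return resisting / driving

    # Illustrative values: a 2-degree slope, shallow failure plane, no overpressure.
    print(infinite_slope_fs(c_prime=2e3, phi_deg=30, gamma_buoyant=8e3,
                            depth=5.0, slope_deg=2.0, excess_pressure=0.0))

Raising excess_pressure lowers the effective normal stress and hence the factor of safety, which is the mechanism by which overpressure preconditions low-angle slopes for failure.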
-
Context-sensitive representations, reasoning, and communication for morally and socially competent robots
Social robots must be able to interact naturally and fluidly with users who lack prior experience with robots. To address this challenge, it is essential to develop robots that are consistent with both the expectations of the interactants and the social conventions of the contexts in which interaction takes place. Moreover, language-capable robots hold unique persuasive power over their human interactants, which offers exciting opportunities to encourage pro-social behavior. However, these capabilities also come with risks, particularly in regard to the potential for robots to accidentally harm human norm systems. Thus, it is not only important to endow social robots with moral and social competence, but also to investigate the impact that these robots have on humans in order to facilitate the successful integration of robots into human society. This dissertation focuses on two overarching research questions: (1) How can we leverage knowledge of both environmental and relational context to enable robots to understand and appropriately communicate about social and moral norms? (2) How do robots influence humans' moral and social norms through explicit and implicit design? We start by examining the impact of human-robot interaction designs on human behavior, with a special focus on how these designs can influence human compliance with social norms in interactions with robots as well as with other humans. We then investigate how to structure human-robot interactions to better facilitate human-robot moral communication. Next, we present computational work on a role-sensitive relational-normative model of robot cognition, which consists of a role-based norm system, role-sensitive mechanisms for using the norm system to reason and make decisions, and the ability to communicate about those decisions on role-based grounds. We then present empirical evidence for how the different forms of explanation enabled by our system practically impact observers' trust, understanding, confidence, and perceptions of robot intelligence. Then, we show how to extend existing moral norm learning techniques to sociocultural linguistic context-sensitive norm learning. As part of this work, we demonstrate how norms of an appropriate level of context specificity can be automatically chosen based on the strength of the available evidence. Finally, we present a simple mathematical model of proportionality that could explain how moral and social considerations should be balanced in multi-agent norm violation response generation, and use this model to start a discussion about the hidden complexity of modeling proportionality.
-
Assessing field-flow fractionation and light scattering for the characterization of extracellular vesicles and polymer colloids
Nanoparticle characterization is centered around understanding how properties such as size, composition, and count correlate with synthetic methodologies, observed behaviors, and end product performance. Current ensemble methods that examine these properties (light scattering, electron microscopy, nanoparticle tracking analysis (NTA), zeta potential, etc.) provide average values and cannot provide important information regarding distributions within the sample. These techniques are also compromised by sample polydispersity and may not be sensitive enough to examine particles that span the range of 1 nm to 1 µm in diameter. To overcome this, samples can be separated to create more monodisperse subpopulations, yet only a few ensemble methods have been readily coupled to separation techniques like field-flow fractionation (FFF). FFF is a family of analytical techniques that has been used to separate and characterize macromolecules and particles since the mid-1960s. Improvements to FFF instrumentation and theory, along with coupling to multiple detectors such as light scattering, differential refractive index, spectrophotometry, and mass spectrometry, have enhanced FFF's capabilities for particle characterization. More recent advancements include nanoparticle tracking analysis (NTA) and on-line Raman spectroscopy for determining the number/size of nanoparticles and the composition of polymeric particles, respectively. Together, these represent critical challenges and frontiers for nanoparticle analysis. The work in this thesis takes a different approach by first critically assessing multiangle light scattering (MALS) as a particle counting technique and then exploring the sensitivity of thermal field-flow fractionation (ThFFF) for compositional analyses. The former is particularly relevant as the FFF-MALS platform is now commonly used across disciplines and products. This, in combination with the particle counting component of the European Union's definition of a nanomaterial, will undoubtedly lead to an increase in the use of MALS for particle counting. However, no work published to date critically assesses the impact of the uncertainty in nanoparticle refractive indices (a value that is difficult to obtain for core-shell type structures) and the light scattering models used in data analysis on the calculated number of nanoparticles. This work seeks to address this gap in knowledge, particularly for complex bioparticles such as outer membrane vesicles (OMVs). The thermal FFF work builds on previously published studies but differs in that the compositional sensitivity of this technique and the use of additives to improve retention and sample recovery are explored. Asymmetrical flow FFF (AF4), coupled to multiangle light scattering (MALS), has recently gained attention for the characterization of bacterial OMVs for nanomedicine and renewable energy applications. A major analytical challenge of OMVs is understanding how particle size and count impact their biogenesis and the cargo sorting proteins in different-sized vesicles. AF4-MALS can be used as an initial separation and enumeration step prior to further analyses with techniques such as tandem mass spectrometry (i.e., proteomics).
While MALS has been used to count biological particles and has shown similar counts to offline methods like NTA, the influence of analyte-dependent parameters (e.g., refractive index (RI) and particle shape/model) on MALS counts has not been examined. Polystyrene latex (PSL) standards (known RI and shape) and complex OMVs (unknown RI and shape) were chosen as the model and sample systems. Particle counts for PSLs differ by upwards of 13% between the sphere and Lorenz-Mie models, while OMV particle counts can vary by up to 200% depending on the model and refractive index used. Additionally, the signal-to-noise ratio of the light scattering intensity can lead to erroneous particle counts (i.e., > 10^18 particles/mL), which was observed when using the coated sphere model for OMVs. While MALS is a promising enumeration technique, the need for particle count standards and accurate RI values impedes the determination of absolute particle counts. Thermal FFF (ThFFF) is another technique under the FFF umbrella that can yield particle size as well as composition. The latter differentiates ThFFF from its better-known sibling, AF4. The driving force for ThFFF is imparted by a temperature gradient that is applied perpendicular to the separation axis. Analyte retention is dependent on the Soret coefficient (ST), a ratio of the analyte's thermal diffusion coefficient (DT) to its translational diffusion coefficient (D). While ThFFF is mainly done in organic solvents, there is an interest in moving towards aqueous solvents (AqThFFF) in order to characterize aqueous-based (biological or synthetic) materials. Two major challenges in performing AqThFFF experiments are the need to use additives (i.e., salts or surfactants) to improve analyte retention and understanding how these additives influence particle thermophoresis. Additionally, little work has been done to assess the compositional sensitivity of ThFFF. To investigate the impact of additives and compositional sensitivity, a model set of butyl acrylate:methyl methacrylate:acrylic acid (BA:MMA:AA) particles with subtle differences in acidic comonomer (0-3%) were examined. A key component of this work was examining the impact of commonly used additives such as tetrabutylammonium perchlorate (TBAP) and FL-70 detergent on analyte retention and recovery. TBAP governed retention of the colloids, while the incorporation of FL-70 increased sample recovery. An integral component of calculating DT values is utilizing accurate D values. Examining D values through DLS (online and offline), via AF4 theory, and through transformations from radius of gyration (Rg) data shows that flow rates as low as 0.3 mL/min during FFF separations can cause D values to be larger than anticipated. While these changes in D do not impact overall trends across latex samples, they change the overall magnitude of DT. AqThFFF can distinguish a 1% difference in acidic comonomer between samples, based on significant differences in DT values, and demonstrates a higher sensitivity than the ~9% previously reported in the literature. Overall, the work presented in this thesis provides insight into the importance of refractive index for particle counting analyses by MALS, as well as subtle nuances and considerations in the data analysis for MALS particle counting. Additives that enhance retention and sample recovery in AqThFFF provide a useful foundation for future advancements and applications. This thesis serves as a platform for future work with biological particles and insight into particle thermophoresis.
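As a small illustration of the Soret relation used above (ST = DT/D), the snippet below estimates D from the Stokes-Einstein equation for a hypothetical 50 nm-radius particle in water and converts an assumed ST into DT. All numerical values are illustrative and are not measurements from this work.

    import math

    # Soret coefficient relation from FFF theory: S_T = D_T / D, so the thermal
    # diffusion coefficient follows from a measured S_T and a measured or
    # estimated translational diffusion coefficient D.
    k_B, T = 1.380649e-23, 298.15        # J/K, K
    eta, radius = 8.9e-4, 50e-9          # Pa*s (water), m (hydrodynamic radius, assumed)

    D = k_B * T / (6 * math.pi * eta * radius)   # Stokes-Einstein estimate, m^2/s
    S_T = 0.2                                    # 1/K, e.g. from ThFFF retention (illustrative)
    D_T = S_T * D                                # m^2 s^-1 K^-1
    print(D, D_T)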
-
Advancing thermal field-flow fractionation for industrial polymer characterization
Field-flow fractionation (FFF) is a family of techniques known for its open channel design and ability to separate polymers and colloids. The applied field in an FFF technique determines the physicochemical properties by which the separation occurs. A well-established theory relates retention time to a retention parameter that is defined by the interaction between the analyte and the applied field as well as the analyte's diffusion coefficient (D). This imparts the ability for FFF techniques to be used as both a separation and a characterization tool. In the case of flow FFF, the retention parameter is dependent solely on D, which subsequently relates to hydrodynamic size. For thermal FFF (ThFFF), the retention parameter is dependent on the Soret coefficient (ST), a ratio of the thermal diffusion coefficient (DT) to D. Existing studies of the thermophoresis of polymers in dilute solutions have revealed trends that may be utilized to solve challenges in polymer analysis. Specifically, DT has been observed to be molar mass independent (above 10^3 to 10^4 Da), polymer-solvent dependent, and, more recently, architecture dependent. The polymer-solvent dependence of DT has been leveraged to characterize the composition of di- and tri-block polymer systems. This polymer-solvent dependence, while useful in driving separations, has proven to be challenging when analyzing new polymer chemistries because of the incomplete understanding of thermophoresis and the resulting trial-and-error approach to selecting a carrier liquid that imparts sufficient polymer retention. This thesis aims to increase the adoption of ThFFF by expanding the scope of polymer chemistries and architectures studied. The work presented here leveraged a leading polymer thermophoresis model to identify a suitable carrier liquid for characterizing the architecture of bottlebrush polymers. The ST values were calculated from measured ThFFF retention times, and their relationships to the degree of polymerization of the brush backbone and sidechains were established. Information about bottlebrush architecture was then obtained using the recently introduced Soret contraction factor (g"), defined as the ratio of the Soret coefficient of a branched polymer (ST,br) to that of a linear polymer (ST,lin) with the same molar mass. Linear analogs were not available for these polyacrylate-containing bottlebrush polymers, and thus ST,lin values were approximated using models for thermal and translational diffusion. A plot of log g" versus the number of chain ends of a bottlebrush showed the expected decrease in log g" when the number of chain ends increased from 120 to 400. Differences in log g" were also noticeable between 30% and 100% grafting densities. This work demonstrated the feasibility of estimating ST,lin and opens the door for architecture characterization in the absence of a linear polymer analog. The g" approach described above has been successfully utilized for model polymer systems derived from well-controlled synthesis and orthogonal characterization. Ultimately, the question is whether g" can be used to characterize polymers in complex formulations of industrial importance. A polydimethylsiloxane (PDMS) containing formulation was targeted because of its relative 'greenness' when compared to petrochemically derived polymers. This PDMS system proved to be challenging on multiple fronts.
First, DT calculations did not yield trends useful for solvent selection due to the observed double Hansen solubility sphere. This yielded two distinct values of DT for each solvent system, both of which were inaccurate (>100% difference). Second, the industrial formulation contained a low (< 10%) amount of crosslinked PDMS amidst a large amount (> 90%) of PDMS diluent as well as gels and microgels. This low level of crosslinked PDMS and the broad size polydispersity required development of a sample preparation procedure. Different sample preparation methods were evaluated using ThFFF-MALS, and the molar mass profiles indicated that centrifugation followed by filtration was the most suitable. Next, PDMS samples with different levels of crosslinking were analyzed, and the log g" distribution for the more crosslinked sample was observed to extend to lower values (more contraction). This is as expected and showed that the g" approach can be used for a new polymer chemistry, PDMS, and for a complex sample mixture. To date, there has been no additional verification of the Soret contraction approach. PDMS offered a unique opportunity to address this gap because this polymer can be depolymerized and GC-MS used to determine products indicative of branching. GC-MS results confirmed the degree-of-branching trend indicated by the g" values, an important step forward. The final project presented in this thesis explores the link between DT and the glass transition temperature (Tg). This work presents a comprehensive compilation of polymer thermal diffusion data and the first comparison of experimental DT and Tg. Across multiple solvent systems, a strong positive correlation was observed. To understand whether there is a physical connection between these two properties, the results of a first-principles Tg model were compared to experimental DT values. This work suggests that entropic forces may contribute to DT, a factor which was explicitly ignored in the derivation of the current leading predictive model. In addition, Tg may be an alternative metric for carrier liquid selection for ThFFF analyses. In summary, ThFFF has become an increasingly powerful tool for the characterization of complex polymer systems. This thesis presents the first reported estimation of ST for a linear polymer utilizing leading models of translational and thermal diffusion, which has expanded the accessibility of g" analysis beyond systems with available linear analogs. The scope of architectures studied by g" now includes bottlebrushes and crosslinked networks along with the previously reported branched systems. The analysis of crosslinked PDMS also presents the first application of g" to an industrial polymer system. These results were also the first to be verified by alternative means. In addition, a potential link between DT and Tg was observed that could lead to improvements in predictive models. This relationship may also serve as a simple metric for approximating the relative magnitude of DT for polymers.
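A minimal numerical illustration of the Soret contraction factor defined above, g" = ST,br/ST,lin at equal molar mass, is given below. The two ST values are invented for illustration only; a log g" below zero simply indicates a more contracted (branched or crosslinked) architecture.

    import math

    # Soret contraction factor: g'' = S_T,branched / S_T,linear at equal molar mass.
    # Values below are illustrative, not from the thesis.
    S_T_branched, S_T_linear = 0.45, 0.60
    g_double_prime = S_T_branched / S_T_linear
    print(g_double_prime, math.log10(g_double_prime))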
-
Monitoring large-scale rock slopes for the detection of rockfalls using structure-from-motion photogrammetry at DeBeque Canyon, Colorado
This research investigates the frequent rockfall events in DeBeque Canyon along I-70. It uses the multi-epoch photogrammetric monitoring datasets collected by the Colorado Department of Transportation between 2014 and 2021. The study aims to assess the effectiveness of the direct geo-referencing approach for creating large-scale photogrammetric models without ground control points (GCPs). It also aims to develop a workflow for creating a regional-scale rockfall inventory and to characterize the spatial variability of rockfall characteristics. Furthermore, the research seeks to evaluate the impact of pre-existing rockmass structures on rockfall frequencies, sizes, and shapes. Comparison of the photogrammetric point clouds created using the direct geo-referencing approach to lidar surveys revealed good matching precision, as good as 0.059 m in terms of the root-mean-square (RMS) difference metric. For efficient handling of the large-scale, multi-epoch models, the study constructed photogrammetric models for only the first and last acquisitions; the corresponding image datasets for intermediate acquisitions were reviewed manually. This approach enabled rapid identification of the temporal occurrence of each rockfall. Segmenting photogrammetric models into smaller segments minimized "bowl-effect" distortion and reduced processing time. The study revealed that rockfall activity varies along DeBeque Canyon, corresponding to changes in lithologies, rockmass conditions, and the presence of oversteepened areas. Increased rockfall activity can be attributed to factors such as the prevalence of weaker rockmasses, an increased degree of fracturing, human interference, and the presence of steeper slopes. Temporal rockfall rates increase in years with a higher number of days with snow thickness exceeding 1 inch. The study found that pre-existing rockmass structures influenced rockfall failure mechanisms, shapes, and the scaling exponent of the power-law equation. The scaling exponents of the magnitude-cumulative-frequency (MCF) curves were found to be impacted mainly by variations in lithology and degree of fracturing. The expected range of block volumes obtained from structural mapping was larger than the actual rockfall volumes. This discrepancy occurred due to model resolution limitations for structural mapping and the occurrence of smaller rockfalls caused by intact rock failure between mapped joints and rockfalls not bounded by joint sets.
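The scaling exponent discussed above comes from fitting a power law to the magnitude-cumulative-frequency (MCF) curve, N(>=V) = a * V^(-b). The sketch below fits b by least squares in log-log space on a synthetic inventory; the synthetic volumes and target exponent are assumptions for illustration only.

    import numpy as np

    def mcf_scaling_exponent(volumes):
        """Scaling exponent b of a magnitude-cumulative-frequency curve,
        N(>=V) = a * V**(-b), fit by least squares in log-log space."""
        v = np.sort(np.asarray(volumes, dtype=float))
        n_ge = np.arange(len(v), 0, -1)              # count of events >= each volume
        slope, _ = np.polyfit(np.log10(v), np.log10(n_ge), 1)
        return -slope

    # Illustrative synthetic inventory (m^3); real exponents vary with lithology
    # and degree of fracturing, as described above.
    rng = np.random.default_rng(1)
    vols = (1 - rng.random(500)) ** (-1 / 0.8) * 0.01   # Pareto-like samples, b ~ 0.8
    print(round(mcf_scaling_exponent(vols), 2))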
-
Evaluation of the Paris Agreement from a realist and liberalist perspective
One of the greatest questions in international relations is how states can work together towards common goals despite having competing interests. This issue is central to the international effort to mitigate climate change. To date, 185 parties have ratified the Paris Agreement to limit the rise in global temperatures to 1.5 to 2 degrees Celsius. Although this agreement has so far received widespread international support, implementation has proven difficult.
-
Evaluating eribulin's ability to produce a cytokine immune response in lung cancer
This study aims to examine the ability of eribulin, a non-taxane microtubule inhibitor, to induce a cytokine immune response similar to that of targeted TKIs in lung cancer cell lines, and furthermore to characterize the cytokine response to general growth arrest therapies compared to specific targeted protein kinase inhibitors such as TKIs.
-
Exploring fractional derivatives and trig functions
The objective of this research is to become familiar with fractional discrete calculus to the extent that fractional derivatives of discrete trigonometric functions can be taken and understood.
-
Meet the editors
The editors are Tyler Pritchard, Editor in Chief; Wyatt Hinkle, Graphic Design Editor; Taylor Self, Technical Sciences Article Editor; James Talbot, Social Sciences Article Editor; McKenna Larson, Content Editor; and Austin Monaghan, Language Editor.
-
Laboratory spotlight: CFCC
The Colorado Fuel Cell Center (CFCC) is a laboratory on the Colorado School of Mines campus that specializes in the analysis and development of fuel cell systems.