Natural Resources Engineering
Permanent URI for this community: https://laurentian.scholaris.ca/handle/10219/2075
Browsing Natural Resources Engineering by Issue Date
Now showing 1 - 20 of 78
Item: Pulsed electron deposition and characterization of nanocrystalline diamond thin films (Laurentian University of Sudbury, 2013-10-07). Alshekhli, Omar.
Diamond is widely known for its extraordinary properties, such as high hardness, thermal conductivity, electron mobility, energy bandgap and durability, making it a very attractive material for many applications. Synthetic diamonds retain most of the attractive properties of natural diamond. Among the types of synthetic diamonds, nanocrystalline diamond (NCD) is being developed for electrical, tribological, optical, and biomedical applications. In this research work, NCD films were grown by the pulsed electron beam ablation (PEBA) method at different process conditions such as accelerating voltage, pulse repetition rate, substrate material and temperature. PEBA is a relatively novel deposition technique, developed to provide researchers with a new means of producing films of equal or better quality than more conventional methods such as pulsed laser deposition, sputtering, and cathodic vacuum arc. The deposition process parameters have been defined by estimating the temperature and pressure of the plasma particles upon impact with the substrates, and comparing the data with the carbon phase diagram. Film thickness was measured by visible reflectance spectroscopy and was in the range of 40-230 nm. The nature of the chemical bonding, namely the sp3/(sp3+sp2) ratio and the nanocrystallinity percentage, was estimated using visible Raman spectroscopy. The films prepared from the ablation of a highly ordered pyrolytic graphite (HOPG) target on different substrates consisted mainly of nanocrystalline diamond material in association with a diamond-like carbon phase. The micro-structural properties and surface morphology of the films were studied by atomic force microscopy (AFM) and scanning electron microscopy (SEM). The mechanical properties of the NCD films were evaluated by nano-indentation.

Item: Numerical modeling of brittle rock failure around underground openings under static and dynamic stress loadings (Laurentian University of Sudbury, 2013-10-09). Golchinfar, Nader.
Stability of underground excavations is a prerequisite for the proper functioning of all other systems in a mining environment. From a safety point of view, the lives of people working underground rely on how well the support systems installed underground are performing. The ground control engineer cannot design an effective support system unless the area of the rock mass around the opening that is prone to failure is well identified in advance, even before the excavation of the tunnel. Under high stress conditions, usually experienced at deep mining levels, stress-induced rock failure is the most common type of instability around underground openings. This thesis focuses firstly on the use of the finite difference numerical tool FLAC to simulate brittle rock failure under static in-situ stresses. Brittle failure of the rock mass around underground openings is a particular type of stress-induced failure, which can result in notch-shaped breakouts around the boundary of the tunnel. Generation of these breakout zones is a discontinuum process, and approximating this process using FLAC, which is a continuum tool, requires careful consideration of the stress conditions and the stress-related behavior of the rock material. Based on plasticity theory, this thesis makes an effort to estimate the breakout formation using an elastic-brittle-plastic material model.
Due to the seismic challenges that deep mining operations are currently experiencing, rockbursting is a major hazard to the stability of underground structures. Therefore, in this research, brittle failure of rock in the vicinity of underground excavations is also approximated under dynamic loading conditions. The numerically modeled results of two different material models are compared with each other along with a previously developed empirical graph. This assessment, when further validated by field observations, may provide a different perspective for underground support design under burst-prone conditions.

Item: Application of seismic monitoring in caving mines (Laurentian University of Sudbury, 2013-10-10). Abolfazlzadeh, Yousef.
Comprehensive and reliable seismic analysis techniques can aid in achieving successful inference of rockmass behaviour in different stages of the caving process. This case study is based on field data from the Telfer sublevel caving mine in Western Australia. A seismic monitoring database was collected during cave progression and breakthrough into an open pit 550 m above the first caving lift. Five seismic analyses were used for interpreting the seismic events. Interpretation of the seismic data identifies the main effects of the geological features on the rockmass behaviour and the cave evolution. Three spatial zones and four important time periods are defined through seismic data analysis. This thesis also investigates correlations between the seismic event rate, the rate of seismogenic zone migration, the mucking rate, the Apparent Stress History, the Cumulative Apparent Volume rate and cave behaviour, in order to determine the failure mechanisms that control cave evolution at the Telfer Gold mine.

Item: In situ railway track fault detection using railcar vibration (Laurentian University of Sudbury, 2014-03-17). Pagnutti, Jeffrey L.
This thesis investigates the development of an automated fault detection system for a novel lightweight railway material haulage system; in particular, the study aims to detect railway track faults at the incipient stage to determine the feasibility of maintenance decision support, ultimately with the function of preventing catastrophic failure. The proposed approach is an extension of the current state of the art in fault detection of unsteady machinery. The most common railway track faults associated with train derailment were considered; namely, horizontal and transverse crack propagation, mechanical looseness, and railbed washout were the faults of interest. A series of field experiments were conducted to build a database of vibration, speed, and localization data in healthy and faulted states. These data were used to develop, investigate, and validate the effectiveness of various approaches for fault detection. A variety of feature sets and classification approaches were investigated to determine the best overall configuration for the fault detector. The feature sets were used to condense data segments and extract characteristics that were sensitive to damage, but insensitive to healthy variations due to unsteady operation. The pattern recognition classifiers were used to categorize new data members as belonging to the healthy class or the faulted class. The fault detection results from the proposed approach were promising. The feasibility of an automated online fault detection system for the lightweight material haulage system examined in this study was confirmed.
The conclusions of this research outline the major potential for an effective fault detection system and address future work for the practical implementation of this system.
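The abstract above does not list the specific feature sets or classifiers that were evaluated, so the following is only a minimal sketch of the general approach it describes: condensing vibration segments into damage-sensitive features and assigning new segments to a healthy or faulted class. The feature choices (RMS, kurtosis, crest factor) and the nearest-centroid classifier are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np

def segment_features(segment):
    """Condense one vibration segment into a small feature vector.
    RMS, kurtosis and crest factor are common damage-sensitive features;
    they are assumptions here, not the feature set used in the thesis."""
    x = np.asarray(segment, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    kurt = np.mean((x - x.mean()) ** 4) / (np.var(x) ** 2 + 1e-12)
    crest = np.max(np.abs(x)) / (rms + 1e-12)
    return np.array([rms, kurt, crest])

def train_centroids(healthy_segments, faulted_segments):
    """Nearest-centroid 'classifier': one mean feature vector per class."""
    h = np.mean([segment_features(s) for s in healthy_segments], axis=0)
    f = np.mean([segment_features(s) for s in faulted_segments], axis=0)
    return {"healthy": h, "faulted": f}

def classify(segment, centroids):
    feats = segment_features(segment)
    dists = {label: np.linalg.norm(feats - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)

# Toy usage with synthetic signals standing in for measured railcar vibration.
rng = np.random.default_rng(0)
healthy = [rng.normal(0, 1.0, 2048) for _ in range(20)]
faulted = [rng.normal(0, 1.0, 2048) + 3.0 * (rng.random(2048) > 0.995) for _ in range(20)]
centroids = train_centroids(healthy, faulted)
print(classify(faulted[0], centroids))   # expected: 'faulted'
```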
Item: Model based fault detection for two-dimensional systems (Laurentian University of Sudbury, 2014-05-05). Wang, Zhenheng.
Fault detection and isolation (FDI) are essential in ensuring safe and reliable operation of industrial systems. Extensive research has been carried out on FDI for one-dimensional (1-D) systems, where variables vary only with time. The existing FDI strategies are mainly focussed on 1-D systems and can generally be classified as model based and process history data based methods. In many industrial systems, the state variables change with both space and time (e.g., sheet forming, fixed bed reactors, and furnaces). These systems are termed distributed parameter systems (DPS) or two-dimensional (2-D) systems. 2-D systems have commonly been represented by the Roesser model and the F-M model. Fault detection and isolation for 2-D systems represents a great challenge in both theoretical development and applications, and only limited research results are available. In this thesis, model based fault detection strategies for 2-D systems are investigated based on the F-M and Roesser models. A dead-beat observer based fault detection method has been available for the F-M model. In this work, an observer based fault detection strategy is investigated for systems modelled by the Roesser model. Using the 2-D polynomial matrix technique, a dead-beat observer is developed and the state estimate from the observer is then input to a residual generator to monitor the occurrence of faults. An enhanced realization technique is combined to achieve efficient fault detection with reduced computations. Simulation results indicate that the proposed method is effective in detecting faults for systems without disturbances as well as those affected by unknown disturbances. The dead-beat observer based fault detection has been shown to be effective for 2-D systems, but strict conditions are required in order for an observer and a residual generator to exist. These strict conditions may not be satisfied for some systems. The effect of process noise is also not considered in the observer based fault detection approaches for 2-D systems. To overcome these disadvantages, 2-D Kalman filter based fault detection algorithms are proposed in the thesis. A recursive 2-D Kalman filter is applied to obtain a state estimate minimizing the estimation error variances. Based on the state estimate from the Kalman filter, a residual is generated reflecting fault information. A model is formulated for the relation of the residual with faults over a moving evaluation window. Simulations are performed on two F-M models and the results indicate that faults can be detected effectively and efficiently using the Kalman filter based fault detection. In the observer based and Kalman filter based fault detection approaches, the residual signals are used to determine whether a fault occurs. For systems with complicated fault information and/or noise, it is necessary to evaluate the residual signals using statistical techniques. Fault detection of 2-D systems is therefore proposed with the residuals evaluated using dynamic principal component analysis (DPCA). Based on historical data, the reference residuals are first generated using either the observer or the Kalman filter based approach. Based on the residual time-lagged data matrices for the reference data, the principal components are calculated and a threshold value obtained. In online applications, the T² value of the residual signals is compared with the threshold value to determine fault occurrence. Simulation results show that applying DPCA to the evaluation of 2-D residuals is effective.
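The DPCA residual evaluation described above follows a standard pattern: reference residuals are arranged into a time-lagged data matrix, principal components are extracted, and a T² statistic on new residuals is compared with a threshold. The sketch below is a generic illustration of that pattern rather than the thesis's implementation; the lag count, the number of retained components and the percentile-based threshold are assumptions.

```python
import numpy as np

def lag_matrix(R, lags):
    """Stack time-lagged copies of the residual sequence (DPCA data matrix)."""
    R = np.asarray(R, dtype=float)
    N = R.shape[0]
    return np.hstack([R[lags - k: N - k] for k in range(lags + 1)])

def fit_dpca(R_ref, lags=2, n_pc=3, quantile=0.99):
    """Fit PCA to lagged, standardized reference residuals; set an empirical T2 threshold."""
    X = lag_matrix(R_ref, lags)
    mean, std = X.mean(axis=0), X.std(axis=0) + 1e-12
    Xs = (X - mean) / std
    _, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    P = Vt[:n_pc].T                                   # loadings of retained components
    lam = (S[:n_pc] ** 2) / (Xs.shape[0] - 1)         # corresponding eigenvalues
    t2_ref = np.sum(((Xs @ P) ** 2) / lam, axis=1)
    return {"mean": mean, "std": std, "P": P, "lam": lam,
            "lags": lags, "threshold": np.quantile(t2_ref, quantile)}

def t2_alarm(R_new, model):
    """T2 of new residuals against the reference model; True flags a fault."""
    Xs = (lag_matrix(R_new, model["lags"]) - model["mean"]) / model["std"]
    t2 = np.sum(((Xs @ model["P"]) ** 2) / model["lam"], axis=1)
    return t2, t2 > model["threshold"]

# Toy usage: fault-free reference residuals, then a bias fault injected mid-run.
rng = np.random.default_rng(1)
model = fit_dpca(rng.normal(0, 1, size=(500, 2)))
test = rng.normal(0, 1, size=(200, 2))
test[100:] += 4.0                                     # simulated fault on the residuals
t2, alarm = t2_alarm(test, model)
print("false-alarm rate:", alarm[:95].mean(), " detection rate:", alarm[110:].mean())
```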
Item: Flexible floating thin film photovoltaic (PV) array concept for marine and lacustrine environments (Laurentian University of Sudbury, 2014-05-16). Trapani, Kim.
The focus of the research is the development of the concept of floating flexible thin film arrays for renewable electricity generation in marine and lacustrine application areas. This research was motivated by reliability issues with wave energy converters, which are prone to large loads from the environment to which they are exposed; a flexible system would not need to withstand these loads but would simply yield to them. The solid state power take-off is an advantage of photovoltaic (PV) technology, which removes the failure risks associated with mechanical machinery as well as potential environmental hazards such as hydraulic oil spillage. The novelty of this technology requires some development before it can be considered feasible for large scale installation. Techno-economics are a big issue in electricity developments and need to be scoped in order to ensure that such systems would be cost-competitive in the market and with other technologies. Other, more technical issues relate to the change in expected electrical yield due to the modulation of the PV array by the waves, and to the electrical performance of the PVs in wet conditions. Results from numerical modelling of the modulating arrays show no expected variation in electrical yield at central latitudes (slightly positive), although at higher latitudes there could be considerable depreciation. With regard to electrical performance, a notable improvement was measured due to the cooling effect, while a slight decrease in performance was also estimated due to water absorption (of ~1.4%) within the panels. Overall, results from both the economic and technical analyses show the feasibility of the concept and that it is a possibility for future commercialisation.

Item: Towards modeling heat transfer using a lattice Boltzmann method for porous media (Laurentian University of Sudbury, 2014-05-16). Banete, Olimpia.
I present in this thesis a fluid flow and heat transfer model for porous media using the lattice Boltzmann method (LBM). A computer simulation of this process has been developed and is written in MATLAB. The simulation code is based on a two-dimensional model, D2Q9. Three physical experiments were designed to validate the simulation model through comparison with the numerical results. In the experiments, physical properties of the air flow and the porous media were used as input for the computer model. The study results are not conclusive, but show that the LBM model may become a reliable tool for the simulation of natural convection heat transfer in porous media. Simulations leading to improved understanding of the processes of air flow and heat transfer in porous media may be important for improving the efficiency of methods of air heating or cooling by passing air through fragmented rock.
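The D2Q9 model named above is the standard two-dimensional, nine-velocity lattice. As a point of reference, the sketch below shows a minimal single-relaxation-time (BGK) D2Q9 flow solver with full bounce-back at solid nodes standing in for the porous matrix. It covers only the flow part (the thesis also models heat transfer, which typically needs a second distribution function), and all parameter values are illustrative rather than taken from the thesis.

```python
import numpy as np

# D2Q9 lattice: velocities e_i, weights w_i, and opposite directions for bounce-back.
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

nx, ny, tau = 100, 50, 0.8            # grid size and relaxation time (illustrative)
gravity = np.array([1e-6, 0.0])       # small body force driving the flow

rng = np.random.default_rng(0)
solid = rng.random((ny, nx)) < 0.3    # random solid fraction standing in for broken rock

def equilibrium(rho, ux, uy):
    usq = ux**2 + uy**2
    feq = np.empty((9, ny, nx))
    for i in range(9):
        eu = e[i, 0]*ux + e[i, 1]*uy
        feq[i] = w[i]*rho*(1 + 3*eu + 4.5*eu**2 - 1.5*usq)
    return feq

rho = np.ones((ny, nx))
ux = np.zeros((ny, nx))
uy = np.zeros((ny, nx))
f = equilibrium(rho, ux, uy)

for step in range(2000):
    rho = f.sum(axis=0)
    ux = np.einsum('i,ijk->jk', e[:, 0].astype(float), f) / rho
    uy = np.einsum('i,ijk->jk', e[:, 1].astype(float), f) / rho
    # BGK collision plus a simple body-force term.
    f += -(f - equilibrium(rho, ux, uy)) / tau
    for i in range(9):
        f[i] += 3*w[i]*(e[i, 0]*gravity[0] + e[i, 1]*gravity[1])*rho
    # Full bounce-back at solid nodes: reverse the post-collision populations.
    f[:, solid] = f[opp][:, solid]
    # Streaming: shift each population along its lattice velocity (periodic domain).
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], e[i, 0], axis=1), e[i, 1], axis=0)

ux[solid] = uy[solid] = 0.0
print("mean pore velocity:", ux[~solid].mean())
```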
Item: A novel method of detecting galling and other forms of catastrophic adhesion in tribotests (Laurentian University of Sudbury, 2014-10-01). Dalton, Gregory Michael.
Tribotests are used to evaluate the performance of lubricants and surface treatments intended for use in industrial applications. They are invaluable tools for lubricant development, since many lubricant parameters can be screened in the laboratory with only the best going on to production trials. Friction force or coefficient of friction is often used as an indicator of lubricant performance, with sudden increases in friction coefficient indicating failure through catastrophic adhesion. Under some conditions the identification of the point of failure can be a subjective process. This raises the question: are there better methods for identifying lubricant failure due to catastrophic adhesion that would be beneficial in the evaluation of lubricants? The hypothesis of this research is that a combination of data from various sensors measuring the real-time response of a tribotest provides better detection of adhesive wear than the coefficient of friction alone. In this investigation an industrial tribotester (the Twist Compression Test) was instrumented with a variety of sensors to record vibrations along two axes, acoustic emissions, electrical resistance, and transmitted torsional and normal forces. The signals were collected at 10 kHz for the duration of the tests. In the main study, D2 tool steel annular specimens were tested on cold-rolled sheet steel at 100 MPa contact pressure in flat sliding at 0.01 m/s. The effects of lubricant viscosity and lubricant chemistry on the adhesive properties of the surface were examined. Test results were analyzed to establish the apparent point of failure based on the traditional friction criteria. Extended tests of one condition were run to various points up to and after this point, and the results were analyzed to correlate sensor data with the test specimen surfaces. Sensor data features were used to identify adhesive wear as a continuous process. In particular, an increase in "friction amplitude" related to a form of stick-slip was used as a key indicator of the occurrence of galling. The findings of this research form a knowledge base for the development of a decision support system (DSS) to identify lubricant failure based on industrial application requirements.

Item: The use of mechanical redundancy for fault detection in non-stationary machinery (Laurentian University of Sudbury, 2014-11-14). ElMaghraby, Mohamed H.
The classical approach to machinery fault detection is one where a machine's condition is constantly compared to an established baseline, with deviations indicating the occurrence of a fault. In the absence of a well-established baseline, fault detection for variable duty machinery requires the use of complex machine learning and signal processing tools. These tools require extensive data collection and expert knowledge, which limits their use for industrial applications. The thesis at hand investigates the problem of fault detection for a specific class of variable duty machinery: parallel machines with simultaneously loaded subsystems. As an industrial case study, the parallel drive stations of a novel material haulage system were instrumented to confirm the mechanical response similarity between simultaneously loaded machines. Using a table-top fault simulator, a preliminary statistical algorithm was then developed for fault detection in bearings under non-stationary operation. Unlike other state-of-the-art fault detection techniques used in monitoring variable duty machinery, the proposed algorithm avoids the need for complex machine learning tools and requires no previous training. The limitations of the initial experimental setup necessitated the development of a new machinery fault simulator to expand the investigation to include transmission systems. The design, manufacturing and setup of the various subsystems within the new simulator are covered in this manuscript, including the mechanical, hydraulic and control subsystems. To ensure that the new simulator successfully met its design objectives, extensive data collection and analysis have been completed and are presented in this thesis. The results confirmed that the developed machine truly represents the operation of a simultaneously loaded machine and as such can serve as a research tool for investigating the application of classical fault detection techniques to parallel machines in non-stationary operation.
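The abstract does not give the details of the preliminary statistical algorithm, so the following is only a minimal sketch of the mechanical-redundancy idea it is built on: because the parallel drive stations are loaded simultaneously, one machine's vibration can serve as the other's baseline, and a sustained statistical divergence between the two flags a fault without any prior training. The window length, the RMS feature and the threshold rule are assumptions chosen purely for illustration.

```python
import numpy as np

def windowed_rms(signal, window):
    """RMS of consecutive non-overlapping windows of a vibration signal."""
    x = np.asarray(signal, dtype=float)
    n = (len(x) // window) * window
    return np.sqrt(np.mean(x[:n].reshape(-1, window) ** 2, axis=1))

def redundancy_alarm(vib_a, vib_b, window=1024, n_sigma=4.0):
    """Flag windows where machine A diverges from machine B.

    Both machines see the same duty cycle, so the log-ratio of their windowed
    RMS values should stay near its typical level; a shift of more than
    n_sigma robust standard deviations is flagged as a fault."""
    ratio = np.log(windowed_rms(vib_a, window) / windowed_rms(vib_b, window))
    center = np.median(ratio)
    spread = 1.4826 * np.median(np.abs(ratio - center)) + 1e-12  # robust sigma
    return np.abs(ratio - center) > n_sigma * spread

# Toy usage: identical duty profile on both drives, with a defect added to drive A.
rng = np.random.default_rng(2)
duty = 1.0 + 0.5 * np.sin(np.linspace(0, 20, 200_000))   # shared non-stationary load
vib_b = duty * rng.normal(0, 1, duty.size)
vib_a = duty * rng.normal(0, 1, duty.size)
vib_a[150_000:] *= 2.5                                    # simulated bearing degradation
print(np.where(redundancy_alarm(vib_a, vib_b))[0][:5])    # first flagged windows
```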
Item: Predicting blast-induced damage for open pit mines using numerical modelling software and field observations (2014-12-02). Hall, Alexander K.
Blasting is the most common method used to fragment rock in the mining industry. However, given the violent nature of explosives and the high variability of results that can occur from blast to blast, there is potential to cause significant damage to the final walls of an open pit, which can lead to slope stability problems, catch bench filling, long-term rock fall hazards and ramp closure. Blasts need to be designed to suit the characteristics of the rock to be broken. Characteristics of the existing rock mass, such as natural jointing, joint orientation, joint condition, and the strength of the rock, all need to be accounted for prior to designing a blast. In general, blasting engineers rely on a combination of empirical analysis and rules of thumb for blast designs. The uncertainty involved with these techniques can lead to significant problems in open pit mining. At the bench scale of an open pit mine, the loss of the bench crest is a concern; at the full pit scale, bench deterioration can jeopardize worker safety and lead to potential closure of the mine. The results of a blast can be highly variable: a blast design that yields favorable results on one side of a pit can have detrimental effects on another wall of the pit or at different elevations in the pit, based on the characteristics of the rock. It often takes multiple iterations of blast designs to achieve an optimal result, which is costly and time consuming for the company that operates the mine. The purpose of this thesis is to evaluate the effectiveness of a relatively new software package, Blo-Up, which combines a finite difference continuum code and a distinct element code in order to model the entire blasting process from start to finish. The main focus of the research is to examine blast-induced damage sustained by final pit walls and to provide techniques for minimizing damage. The specific areas of the study are: 1) to confirm the software is able to give results similar to those observed in the field; 2) to model pre-split designs in homogeneous rock; 3) to model pre-split designs in jointed rock masses; 4) to model the effect of a production hole detonation on inclined pre-split holes, as opposed to vertical pre-split holes; and 5) to model the effects of large scale production blasts on final wall stability. For the purposes of this study, kimberlite rock was chosen as the focus due to its ductile characteristics, which make controlled blasting difficult. The main findings of the research are as follows: 1) the software is able to replicate blast outcomes observed in the field; 2) the importance of tailoring the pre-split design to the rock mass is critical; and 3) the main production blast must be well balanced if the explosive energy is to be evenly distributed through the system.

Item: Cross-linked polymers of phenylacetylene and 1,3-diethynylbenzene: new polymer precursors for nanoporous carbon materials for supercapacitors and gas storage (Laurentian University of Sudbury, 2015-01-09). Grundy, Mark.
The increasing threats of global warming, rapid depletion of fossil fuels, and increasing energy demands are driving an enormous amount of research into clean renewable sources of energy, flue gas capture technologies, and environmentally friendly energy storage devices, to name a few. Activated carbons present a multipurpose material commonly used in many of these increasingly popular green technologies. A wide range of cross-linked acetylenic polymers of phenylacetylene and 1,3-diethynylbenzene were synthesized and investigated in this thesis to generate materials for electrochemical double layer capacitors, CO2 capture, and hydrogen storage. Chemical activation of the copolymers in the presence of KOH was shown to produce highly microporous carbons with various textural properties. The specific cross-linking densities of the polymer precursors prior to carbonization were shown to greatly affect the carbon yield, surface area, pore volumes and pore sizes of the carbons produced. Electrochemical measurements of the activated carbons showed their impressive performance as capacitor materials, with high specific capacitances (up to 446 F g−1 at 0.5 A g−1 in a 3-electrode cell) and long cycle life. Gas sorption studies also demonstrated impressive H2 and CO2 adsorption capacities (up to 2.66 wt% or 13.3 mmol g−1 for H2 adsorption at 77 K and 1 atm, and up to 30.6 wt% or 6.95 mmol g−1 for CO2 adsorption at 273 K and 1 atm). Owing to the high content of pendent alkyne groups in these polymers, complexation reactions with metallic carbonyl ligands provide an effective way of dispersing metallic and metal oxide nanoparticles within the synthesized copolymers, which could provide additional pseudocapacitive properties. An appropriate copolymer with high alkyne content was subjected to complexation with Co2(CO)8, and subsequently carbonized and oxidized to yield carbon-supported CoxOy/Co nanoparticles (CoxOy@C-CPD76%). In addition to pseudocapacitive contributions, the cobalt species also effectively catalyzed the production of graphitic networks within the carbon support, improving their conductive properties.
Electrochemical measurements demonstrated impressive specific capacitance (310 F g−1 at 0.1 A g−1) compared with non-activated carbons (160-177 F g−1 at 0.1 A g−1) synthesized under identical conditions, and provided a large stable potential window (1.4 V) in an aqueous KOH solution. The combined electrochemical double layer capacitance and pseudocapacitance behaviour of the carbon and CoxOy/Co also provided improved energy densities (21 W h kg−1) and uncompromised power densities (2017 W kg−1) compared with the pristine carbons (~2034 W kg−1).

Item: Numerical analysis of porous piezoelectric materials (Laurentian University of Sudbury, 2015-01-13). Singh, Jaspreet.
Three-dimensional finite element models based on a unit-cell approach are developed to characterize the complete electromechanical properties of: (i) zero-dimensional (3-0), one-dimensional (3-1) and three-dimensional (3-3) type porous piezoelectric structures made of lead zirconium titanate (PZT-7A) and relaxor (PMN-PT based) ferroelectrics (RL); and (ii) 3-3 type porous piezoelectric foam structures made of several classes of piezoelectric materials such as barium sodium niobate (BNN), barium titanate (BaTiO3) and relaxor (PMN-PT based) ferroelectrics (RL). In this thesis, the finite element software ABAQUS is used to characterize the electromechanical response of 3-0, 3-1 and 3-3 type porous piezoelectric structures. Appropriate boundary conditions are invoked for the various porous piezoelectric structures (i.e., 3-0, 3-1 and 3-3 type) to ensure that the electromechanical deformation response of the unit cell, under conditions of electrical and mechanical loading, is representative of the entire porous piezoelectric structure. Overall, this thesis demonstrates that microstructural features such as porosity connectivity, porosity aspect ratio, porosity volume fraction, foam shape, and material selection play significant roles in the electromechanical properties and figures of merit of porous piezoelectric structures.

Item: Modeling and fault detection of an industrial copper electrowinning process (2015-01-22). Wiebe, Susan.
Copper electrowinning plants are where high purity copper (Cu) product is obtained through electrochemical reduction of copper from the leaching solution. The presence of selenium (Se) and tellurium (Te) in copper sulphide minerals may result in contamination of the leach solution and, eventually, of the copper cathode. Unfortunately, hydrometallurgical processes are often difficult to monitor and control due to day-to-day fluctuations in the process as well as limitations in capturing data at high frequencies. The purpose of this work is to model key variables in the copper electrowinning tank and to apply statistical fault detection to the selenium/tellurium removal and copper electrowinning process operations. First principles modeling was applied to the copper electrowinning tank and partial differential equation models were derived to describe the process dynamics. Industrial data were used to estimate the model parameters and validate the resulting models. Comparison with industrial data shows that the models fit reasonably well with plant operation. Simulations of the models were run to explore the dynamics under varying operating conditions. The derived models provide a useful tool for future process modification and control development.
Using the collected industrial operating data, dynamic principal component analysis (DPCA) based fault detection was applied to the Se/Te removal and copper electrowinning processes at Vale's Electrowinning Plant in Copper Cliff, ON. The fault detection results from the DPCA based approach were consistent with the industrial product quality tests. After faults were detected, fault diagnosis was applied to determine their causes. The fault detection and diagnosis system helps define the causes of upset conditions that lead to copper cathode contamination.

Item: Long-term schedule optimization of an underground mine under geotechnical and ventilation constraints using SOT (Laurentian University of Sudbury, 2015-01-26). Sharma, Vijay.
Long-term mine scheduling is complex as well as time and labour intensive. Yet in the mainstream of the mining industry there is no computing program for schedule optimization and, in consequence, schedules are still created manually. The objective of this study was to compare a base case schedule generated with the Enhanced Production Scheduler (EPS®) and an optimized schedule generated with the Schedule Optimization Tool (SOT). The intent of having an optimized schedule is to improve the project value for underground mines. This study shows that SOT generates mine schedules that improve the Net Present Value (NPV) associated with orebody extraction. It does so by systematically and automatically exploring options to vary the sequence and timing of mine activities, subject to constraints. First, a conventional scheduling method (EPS®) was adopted to identify a schedule of mining activities that satisfied basic sets of constraints, including physical adjacencies of mining activities and operational resource capacity. Additional constraint scenarios explored were geotechnical and ventilation constraints, which negatively affect development rates. Next, the automated SOT procedure was applied to determine whether the schedules could be improved upon. It was demonstrated that SOT permitted the rapid re-assessment of project value when new constraint scenarios were applied. This study showed that the automated schedule optimization added value to the project every time it was applied; in addition, re-optimization and re-evaluation were achieved quickly. Therefore, the tool used in this research produced more optimized schedules than those produced using conventional scheduling methods.
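Since the comparison above is made on Net Present Value, it may help to recall how the NPV of a candidate schedule is computed: each period's net cash flow is discounted back to the present, so a schedule that brings revenue-generating activities forward scores higher even when the undiscounted totals are equal. The cash flows and discount rate below are illustrative values, not figures from the study.

```python
def npv(cash_flows, discount_rate):
    """Net Present Value of period-end cash flows (periods 1, 2, ... in order)."""
    return sum(cf / (1.0 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Two hypothetical 5-year schedules extracting the same total value (in $M):
base_case = [10, 20, 30, 40, 50]   # high-value stopes reached late
optimized = [50, 40, 30, 20, 10]   # sequence re-ordered to reach them early
rate = 0.10
print(f"base case NPV: {npv(base_case, rate):6.1f} $M")
print(f"optimized NPV: {npv(optimized, rate):6.1f} $M")   # higher, despite equal totals
```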
Item: The use of packed sphere modelling for airflow and heat exchange analysis in broken or fragmented rock (Laurentian University of Sudbury, 2015-01-26). Schafrik, Sidney.
Airflow and heat exchange characterizations for large bodies of fragmented rock in mines, such as those at the Creighton and Kidd Creek Mines, remain fundamentally empirical in nature. Geometric properties known to affect heat transfer in, for example, heat exchanger design, such as the heat exchange area or the length and shape of airflow passages, do not appear in the accepted 'design' equations for those bodies of broken rock. This thesis couples a method of discontinuum porous media modelling (referred to as a packed sphere model, abbreviated PSM) with a computational fluid dynamics (CFD) code to develop a proxy methodology for the analysis of porous media that incorporates variables of airflow, heat transfer, and geometry (including porosity and tortuosity). Material property values for equivalent continuum fluid dynamics models are established and are found to follow the formulations for airflow branches used in mine ventilation network analysis. Laboratory experiments of airflow and heat transfer with the PSMs were compared to CFD results for the same models at a 1:1 scale, to verify the approach and the CFD results. Three separate approaches were investigated for scaling the PSM results for use in large scale (~1 km3) CFD simulations of industrial situations. The result of the work presented in this thesis is a verified methodology for establishing CFD airflow and heat transfer parameters for large bodies of broken and fragmented rock from knowledge of the particle size distribution parameters or the body porosity. The application of the methodology is illustrated with reference to the so-called Natural Heat Exchange Area at Creighton Mine, Sudbury, Ontario.
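The ventilation network formulation referred to above is, in standard practice, the Atkinson square law, in which the pressure loss across a branch scales with the square of the airflow quantity, Δp = R·Q². A minimal illustration of fitting a branch resistance R to simulated or measured (Q, Δp) pairs is sketched below; the data values are invented and the simple least-squares fit is an assumption, not the thesis's scaling procedure.

```python
import numpy as np

# Hypothetical (airflow quantity Q [m^3/s], pressure loss dp [Pa]) pairs,
# as might be extracted from packed-sphere CFD runs of a broken-rock body.
Q = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
dp = np.array([13.0, 49.0, 115.0, 198.0, 312.0])

# Atkinson square law: dp = R * Q**2. Least-squares fit of R through the origin.
R = np.sum(dp * Q**2) / np.sum(Q**4)
print(f"fitted branch resistance R = {R:.3f} N s^2/m^8")

# The fitted branch can then be treated like any other airway in a
# ventilation network solver: predict the loss at a new airflow quantity.
print(f"predicted loss at Q = 30 m^3/s: {R * 30.0**2:.0f} Pa")
```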
Item: Application of GenRel for maintainability analysis of underground mining equipment: based on case studies of two hoist systems (Laurentian University of Sudbury, 2015-01-29). Xu, Chao.
With the increasing costs of extracting ores, mines are becoming more mechanized and automated. Mechanization and automation can make considerable contributions to mine productivity, but equipment failures and maintenance have an impact on profit. Implementing maintenance at suitable time intervals can save money and improve the reliability and maintainability of mining equipment. This thesis focuses on maintainability prediction of mining machinery. For this purpose, a software tool, GenRel, was developed at the Laurentian University Mining Automation Laboratory (LUMAL). GenRel is based on the application of genetic algorithms (GAs) to simulate the failure/repair occurrences during the operational life of equipment. In GenRel it is assumed that failures of mining equipment, caused by an array of factors, follow biological evolution theory. GenRel then simulates the failure occurrences during a time period of interest using genetic algorithms coupled with a number of statistical techniques. This thesis shows the applicability and limitations of GenRel through case studies, especially in using a discrete probability distribution function. One of the objectives of this thesis is to improve GenRel: the Poisson discrete probability distribution function is added to the pool of available probability functions. After improving and enhancing GenRel, the author carries out two groups of case studies. The objectives of the case studies include an assessment of the applicability of GenRel using real-life data and an investigation of the relationship between data size and prediction results. Discrete and continuous distribution functions are applied to the same input data. The data used in the case studies are compiled from failure records of two hoist systems at different mine sites in the Sudbury area in Ontario, Canada. The first group of case studies involves maintainability analysis and predictions for a three-month operating period and a six-month operating period of a hoist system. The second group of case studies investigates the applicability of GenRel as a maintainability analysis tool using historical failure/repair data from another mine hoist system over three different time periods: three months, six months and one year. Both groups apply two different probability distribution functions (discrete and continuous) to investigate the best fit for the applied data sets, and then make a comparative analysis. In each case study, a statistical test is carried out to examine the similarity between the predicted data set and the real-life data set for the same time period. In all case studies, no significant impact of the data size on the applicability of GenRel was observed. In continuous distribution fitting, GenRel demonstrated its capability of predicting future data with data sizes ranging from 166 to 762. In discrete probability fitting, the case studies indicated to a degree the applicability of GenRel for the hoist systems at Mine A and Mine B. In the discussion and conclusion sections, the author discloses the findings from the case studies and suggests future research directions.
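GenRel itself is not documented here, so the sketch below only illustrates two generic ingredients named in the abstract: fitting the newly added Poisson distribution to failure counts per interval, and running a statistical test of similarity between an observed data set and a fitted one. The synthetic counts, the maximum-likelihood fit and the chi-square goodness-of-fit test are illustrative assumptions, not GenRel's internal procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical failure counts per week for a hoist system over ~6 months.
rng = np.random.default_rng(3)
counts = rng.poisson(lam=2.3, size=26)

# Maximum-likelihood fit of a Poisson distribution: lambda_hat = sample mean.
lam_hat = counts.mean()

# Chi-square goodness of fit: observed vs expected frequencies of 0, 1, ..., k failures,
# with the right tail lumped into the last bin.
k_max = 5
observed = np.array([(counts == k).sum() for k in range(k_max)] + [(counts >= k_max).sum()])
pmf = stats.poisson.pmf(np.arange(k_max), lam_hat)
expected = np.append(pmf, 1.0 - pmf.sum()) * counts.size

chi2 = ((observed - expected) ** 2 / expected).sum()
dof = len(observed) - 1 - 1            # bins - 1 - one estimated parameter
p_value = stats.chi2.sf(chi2, dof)
print(f"lambda = {lam_hat:.2f}, chi2 = {chi2:.2f}, p = {p_value:.3f}")
```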
Item: Application of neuroergonomics in the industrial design of mining equipment (2015-06-26). Mach, Quoc Hao.
Neuroergonomics is an interdisciplinary field merging neuroscience and ergonomics to optimize performance. In order to design an optimal user interface, we must understand the cognitive processing involved. Traditional methodology incorporates self-assessment from the user. This dissertation examines the use of neurophysiological techniques in quantifying the cognitive processing involved in allocating cognitive resources. Attentional resources, cognitive processing, memory and visual scanning are examined to test the ecological validity of theoretical laboratory settings and how they translate to real-life settings. By incorporating a non-invasive measurement technique, such as the quantitative electroencephalogram (QEEG), we are able to examine connectivity patterns in the brain during operation and discern whether or not a user has obtained expert status. Understanding the activation patterns during each phase of design will allow us to gauge whether our design has balanced the cognitive requirements of the user.

Item: Star-structured polyethylene nanoparticles via Pd-catalyzed living polymerization: synthesis, characterization, and catalytic applications (Laurentian University of Sudbury, 2015-07-08). Landry, Eric D.
The arm-first synthesis of large unimolecular star-structured polyethylene nanoparticles, or SPE-NPs (MW > 1,000 kg/mol, PDI ≈ 1.1), joined by a cross-linked polynorbornadiene (PNBD) core is described in this thesis. SPE-NPs having a high arm number (fn > 100) and tunable arm topologies (hyperbranched HBPE or linear-but-branched LBPE) are conveniently synthesized in a single reactor following four consecutive steps. In step 1, living ethylene polymerization is catalyzed by 0.1 mmol of Pd-diimine catalyst 1 to grow HBPE arms (1 atm C2H4/15 °C) or LBPE arms (27 atm C2H4/5 °C) of tunable lengths (tE = 1-5 h, Mn = 11-40 kg/mol). In step 2, the norbornadiene (NBD) cross-linker is added into the ethylene reactor for several hours (tNBD = 1-4 h), yielding PE-b-PNBD block copolymers with a short PNBD segment bearing cross-linkable pendant double bonds. SPEs are then formed in step 3 during precipitation in acidified methanol (H+/MeOH), and the final SPE-NPs are formed in step 4 after several hours of drying in vacuo at 120 °C. A thorough systematic investigation of the reaction parameters indicates that to produce increasingly larger SPE-NPs, it is essential to add a significant molar excess of NBD to 1 ([NBD]0/[1]0 > 50) and to synthesize short LBPE arms but large HBPE arms. When synthesized with LBPE arms, the SPE-NPs have higher MW compared to those synthesized with HBPE arms, due to the lower steric hindrance of the linear arms, which enables a high number of arms to be joined at the PNBD core. Furthermore, the Pd-diimine catalyst used in the synthesis of the SPE-NPs was encapsulated within the cross-linked PNBD core. These encapsulated Pd(II) species were tested for their activity in hydrogenation reactions of terminal alkenes and alkynes (1-octene, 1-hexene, and 1-hexyne) and Heck coupling reactions of iodobenzene and n-butyl acrylate. Preliminary data suggest that these SPE-NPs may be used as models for the design of more advanced recyclable nanovessels for Pd(II) catalysts.

Item: Estimation of confined peak strength for highly interlocked jointed rockmasses (Laurentian University of Sudbury, 2015-07-20). Bahrani, Navid.
The determination of rockmass strength for mining has become critically important in recent years due to the increase in the number of projects at depths exceeding 1500 m. The commonly used empirical approaches for the estimation of rockmass strength are primarily based on experience at shallow depths (< 1500 m) and on observations of rockmass behaviour at low confinement (e.g., tunnel wall failure). Therefore, the application of these techniques for estimating the strength of rockmasses that are highly interlocked and confined (e.g., pillar cores) is hypothesized to be flawed. The goal of this research is to develop reliable means of estimating the confined strength of highly interlocked jointed rockmasses. A two-dimensional code based on the Distinct Element Method (DEM) and its embedded Grain-based Model (GBM) is used to simulate the behaviour of a highly interlocked jointed rockmass to better understand its Strength Degradation (SD) from intact rock with increasing confinement. The GBM is first calibrated to the laboratory response of intact and granulated marble. The term "granulated" refers to a heat-treated marble in which the cohesion at the grain boundaries has been destroyed. The granulated marble represents an analogue for a highly interlocked jointed rockmass. The calibrated GBMs are then used to simulate micro-defected and defected rocks and jointed rockmasses. The results of triaxial test simulations on the calibrated synthetic rockmass specimens are used to develop two semi-empirical approaches. In the first approach, called the SD approach, equations are developed that relate the strength degradation of a jointed rockmass from intact rock to the confinement. The second approach is based on adjusting the strength parameters of the Hoek-Brown failure criterion to extend its applicability to highly interlocked jointed rockmasses. It is demonstrated that these two approaches can be used to estimate the confined strength of such rockmasses in situations where the unconfined and confined strengths of the intact rock and the unconfined strength of the rockmass are known. The findings of this research provide the foundation for a better characterization of the strength of highly interlocked jointed rockmasses, and increase our understanding of the influence of confinement on rockmass strength.
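For readers unfamiliar with the Hoek-Brown failure criterion mentioned above, its generalized form gives the major principal stress at failure as σ1 = σ3 + σci·(mb·σ3/σci + s)^a, so adjusting mb, s and a changes how quickly peak strength grows with confinement σ3. The short sketch below simply evaluates this standard criterion for two illustrative parameter sets; the numbers are not the calibrated values from the thesis.

```python
def hoek_brown_sigma1(sigma3, sigma_ci, mb, s, a):
    """Generalized Hoek-Brown peak strength (major principal stress at failure)."""
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

sigma_ci = 120.0  # intact unconfined compressive strength, MPa (illustrative)
for sigma3 in (0.0, 5.0, 10.0, 20.0):
    intact = hoek_brown_sigma1(sigma3, sigma_ci, mb=20.0, s=1.0, a=0.5)
    jointed = hoek_brown_sigma1(sigma3, sigma_ci, mb=5.0, s=0.05, a=0.5)
    print(f"sigma3 = {sigma3:5.1f} MPa   intact: {intact:6.1f} MPa   "
          f"interlocked rockmass (assumed parameters): {jointed:6.1f} MPa")
```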
Item: Object distance measurement using a single camera for robotic applications (Laurentian University of Sudbury, 2015-08-10). Alizadeh, Peyman.
Visual servoing is defined as controlling robots by extracting data obtained from the vision system, such as the distance of an object with respect to a reference frame, or the length and width of the object. There are three image-based object distance measurement techniques: i) using two cameras, i.e., stereovision; ii) using a single camera, i.e., monovision; and iii) using a time-of-flight camera. The stereovision method uses two cameras to find the object's depth and is highly accurate. However, it is costly compared to the monovision technique due to the higher computational burden and the cost of two cameras (rather than one) and related accessories. In addition, in stereovision a larger number of images of the object need to be processed in real time, and as the distance of the object from the cameras increases, the measurement accuracy decreases. In the time-of-flight distance measurement technique, distance information is obtained by measuring the total time for light to travel to and reflect from the object. The shortcoming of this technique is that it is difficult to separate the incoming signal, since it depends on many parameters such as the intensity of the reflected light, the intensity of the background light, and the dynamic range of the sensor. However, for applications such as a rescue robot or object manipulation by a robot in a home or office environment, the high accuracy distance measurement provided by stereovision is not required. Instead, the monovision approach is attractive for some applications due to: i) lower cost and lower computational burden; and ii) lower complexity due to the use of only one camera. Using a single camera for distance measurement, object detection and feature extraction (i.e., finding the length and width of an object) is not yet well researched and there are very few published works on the topic in the literature. Therefore, using this technique for real-world robotics applications requires more research and improvement. This thesis mainly focuses on the development of object distance measurement and feature extraction algorithms using a single fixed camera and a single camera with variable pitch angle, based on image processing techniques. As a result, two different improved and modified object distance measurement algorithms were proposed: one for the case where the camera is fixed at a given angle in the vertical plane, and one for the case where it rotates in a vertical plane. In the proposed algorithms, as a first step, the object distance and dimensions such as length and width were obtained using existing image processing techniques. Since the results were not accurate due to lens distortion, noise, variable light intensity and other uncertainties such as deviation of the position of the object from the optical axis of the camera, in a second step the distance and dimensions obtained from the existing techniques were corrected in the X- and Y-directions and for the orientation of the object about the Z-axis in the object plane, using experimental data and identification techniques such as the least squares method. Extensive experimental results confirmed that the measurement error was reduced from 9.4 mm to 2.95 mm for distance, from 11.6 mm to 2.2 mm for length, and from 18.6 mm to 10.8 mm for width. In addition, the proposed algorithm is significantly improved with the proposed corrections compared to existing methods. Furthermore, the improved distance measurement method is computationally efficient and can be used for real-time robotic application tasks such as pick and place and object manipulation in a home or office environment.
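The second-step correction described above amounts to fitting a correction model to (raw measurement, ground truth) pairs collected experimentally and then applying it to new monovision measurements. The sketch below shows the simplest version of that idea, a linear least-squares correction of the raw distance estimate; the calibration numbers are invented, and the thesis's actual correction model (covering X, Y and orientation) is not reproduced here.

```python
import numpy as np

# Hypothetical calibration data: raw distances from the single-camera pipeline (mm)
# paired with ground-truth distances measured by hand (mm).
raw = np.array([205.0, 305.0, 402.0, 511.0, 608.0, 714.0])
true = np.array([200.0, 298.0, 396.0, 500.0, 597.0, 701.0])

# Least-squares fit of a linear correction: corrected = a * raw + b.
A = np.column_stack([raw, np.ones_like(raw)])
(a, b), *_ = np.linalg.lstsq(A, true, rcond=None)

def corrected_distance(raw_mm):
    """Apply the identified correction to a new monovision measurement."""
    return a * raw_mm + b

error_before = np.abs(raw - true).mean()
error_after = np.abs(corrected_distance(raw) - true).mean()
print(f"mean error before: {error_before:.1f} mm, after: {error_after:.1f} mm")
print(f"corrected estimate for a 450 mm raw reading: {corrected_distance(450.0):.1f} mm")
```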