Distinguished Lecturer Program
The I&M Society Distinguished Lecturer Program (DLP) is one of the most exciting programs offered to our chapters, I&M members, and IEEE members. It provides I&M chapters around the world with talks by experts on topics of interest and importance to the I&M community. Together with our conferences and publications, it is one of the principal ways we disseminate knowledge in the I&M field. Our lecturers are among the most qualified experts in their fields, and the program offers our members a first-hand chance to interact with these experts during their lectures. The I&M Society aids chapters financially so that they can take advantage of this program.
All Distinguished Lecturers are outstanding in their fields of specialty. Collectively, they possess a broad range of expertise within the area of I&M. Chapters are therefore encouraged to use this program as a means to make their local I&M community aware of the most recent scientific and technological trends and to enhance their member benefits. Although lectures are mainly organized to benefit existing members and chapters, they can also be effective in generating membership and encouraging new chapter formation. Interested parties are encouraged to contact the I&M DLP Chair regarding this type of activity.
I&M AdCom (2012-2015); VP Membership; Membership Development Committee Chair; I&M Chapter Liaison; Distinguished Lecturer Program Chair; Faculty Course Development Award Chair; Graduate Fellowship Award Chair
Applied Microwave Nondestructive Testing Laboratory (amntl)
Electrical and Computer Engineering Department
Missouri University of Science & Technology (S&T)
Rolla, MO 65409, US
Millimeter-wave signals span the frequency range of 30 GHz to 300 GHz, corresponding to a wavelength range of 10 mm to 1 mm. Signals at these frequencies can easily penetrate dielectric materials and composites and interact with their inner structures. The relatively small wavelengths and wide bandwidths associated with these signals enable the production of high spatial-resolution images of materials and structures. The incorporation of imaging techniques, such as lens-focused and near-field methods, synthetic aperture focusing, holographic methods, and robust back-propagation algorithms, into more advanced and unique millimeter-wave imaging systems has brought about a flurry of activity in this area, in particular for nondestructive evaluation (NDE) applications. These imaging systems and techniques have been successfully applied to a wide range of applications including:
• detection and evaluation of corrosion under paint,
• inspection of the space shuttle external fuel tank spray-on foam insulation (SOFI) and acreage heat tiles for interior flaw and corrosion detection and evaluation,
• inspection of layered composites such as radomes and control surfaces for interior flaws and moisture ingress, and
• detection and evaluation of disbond in carbon fiber-reinforced polymer (CFRP) retrofitted concrete bridge members.
Near-field techniques have been prominently used for these applications. However, undesired effects related to changing standoff distance have motivated several innovative techniques for automatically removing standoff-distance variation. Ultimately, imaging techniques must produce high-resolution three-dimensional images in real time using portable systems. To this end, and to expedite the imaging process while providing high-resolution images of a structure, the design and demonstration of a 6” by 6” one-shot, rapid, and portable imaging system (Microwave Camera), consisting of 576 resonant slot elements, was recently completed. Currently, efforts are being expended to enable monostatic imaging and to increase its operating frequency into the higher millimeter-wave range. This presentation provides an overview of these techniques, along with several typical examples where these imaging techniques have effectively provided viable solutions to many critical problems.
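As a rough illustration of how synthetic aperture focusing works, the sketch below simulates a single-frequency, monostatic line scan over a point scatterer and focuses the data by delay-and-sum back-propagation. The geometry, frequency, and data are hypothetical stand-ins, not the Microwave Camera's actual processing chain:

```python
import numpy as np

# Minimal single-frequency synthetic-aperture focusing (delay-and-sum) sketch.
# A real system measures complex reflection coefficients over a 2D aperture,
# often over a wide bandwidth; here: a 1D scan, one frequency, one scatterer.
c = 3e8                      # speed of light (m/s)
f = 30e9                     # operating frequency: 30 GHz -> 10 mm wavelength
k = 2 * np.pi * f / c        # wavenumber

aperture_x = np.linspace(-0.075, 0.075, 64)   # 15 cm scan line
target_x, target_z = 0.02, 0.05               # point scatterer position (m)

# Simulated monostatic data: round-trip phase from antenna to scatterer.
r = np.sqrt((aperture_x - target_x)**2 + target_z**2)
data = np.exp(-1j * 2 * k * r)

# Delay-and-sum focusing: back-propagate the data to each image pixel.
x_pix = np.linspace(-0.075, 0.075, 128)
z_pix = np.linspace(0.01, 0.10, 90)
image = np.zeros((z_pix.size, x_pix.size))
for iz, z in enumerate(z_pix):
    for ix, x in enumerate(x_pix):
        rp = np.sqrt((aperture_x - x)**2 + z**2)
        image[iz, ix] = np.abs(np.sum(data * np.exp(1j * 2 * k * rp)))

iz, ix = np.unravel_index(image.argmax(), image.shape)
print(f"focused peak near x={x_pix[ix]*100:.1f} cm, z={z_pix[iz]*100:.1f} cm")
```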
Abstract: The permittivity (dielectric properties) of a material is one of the factors that determine how the material interacts with an electromagnetic field. Knowledge of the dielectric properties of materials and of their frequency and temperature dependence is of great importance in various areas of science and engineering, in both basic and applied research. Permittivity has always been an important quantity to electrical engineers and physicists involved in the design and application of circuit components. Over the past several decades, knowledge of permittivity has also become important to scientists and engineers involved in the design of stealth vehicles, applications most often associated with the defense industry. For the typical electrical engineer, permittivity is a number that is needed to solve Maxwell’s equations. One purpose of this presentation is to explain why a material has a particular permittivity; the short answer is that a material has a particular permittivity because of its molecular structure. Another is to show how the permittivity can be related to other physical material properties.
Knowledge of permittivity has become increasingly important to agricultural engineers, biological engineers, and food scientists. The most obvious application of this knowledge is in microwave and RF heating of food products, where the dielectric properties determine how long a food item needs to be exposed to RF or microwave energy for proper cooking. For prepackaged food items, the dielectric properties of the packaging materials are also important, since the interaction with the packaging material also affects the cooking time. Besides these obvious applications, there are numerous not-so-obvious ones. Dielectric properties can often be related to a physical parameter of interest: a change in the molecular structure or composition of a material results in a change in its permittivity. It has been demonstrated that material properties such as moisture content, fruit ripeness, bacterial content, mechanical stress, tissue health, and other seemingly unrelated parameters are related to the dielectric properties or permittivity of the material. Many key parameters of colloids, such as structure, consistency, and concentration, are directly related to the dielectric properties. Yeast concentration in a fermentation process, bacterial count in milk, and the detection and monitoring of microorganisms are a few examples on which research has been performed. Diseased tissue has a different permittivity from healthy tissue. Accurate measurements of these properties can provide scientists and engineers with valuable information that allows them to properly use the material in its intended application or to monitor a process for improved quality control. Techniques for measuring permittivity will be reviewed; they cover the frequency range from DC to 1 THz.
Abstract: The electromagnetic properties (permittivity and permeability) of a material determine how the material interacts with an electromagnetic field. Knowledge of these properties and of their frequency and temperature dependence is of great importance in various areas of science and engineering, in both basic and applied research. These properties have always been important quantities to electrical engineers and physicists involved in the design and application of circuit components. Over the past several decades, knowledge of the electromagnetic properties has also become important to scientists and engineers involved in the design of stealth vehicles, applications most often associated with the defense industry. Besides these traditional applications, knowledge of the electromagnetic properties has become increasingly important to agricultural engineers, biological engineers, and food scientists. The most obvious application of this knowledge is in microwave and RF heating of food products, where the electromagnetic properties determine how long a food item needs to be exposed to RF or microwave energy for proper cooking. For prepackaged food items, the electromagnetic properties of the packaging materials are also important, since the interaction with the packaging material also affects the cooking time. Besides these obvious applications, there are numerous not-so-obvious ones. Electromagnetic properties can often be related to a physical parameter of interest: a change in the molecular structure or composition of a material results in a change in its electromagnetic properties. It has been demonstrated that material properties such as moisture content, fruit ripeness, bacterial content, mechanical stress, tissue health, and other seemingly unrelated parameters are related to the dielectric properties or permittivity of the material. Many key parameters of colloids, such as structure, consistency, and concentration, are directly related to the electromagnetic properties. Yeast concentration in a fermentation process, bacterial count in milk, and the detection and monitoring of microorganisms are a few examples on which research has been performed. Diseased tissue has different electromagnetic properties from healthy tissue.
Accurate measurements of these properties can provide scientists and engineers with valuable information that allows them to properly use the material in its intended application or to monitor a process for improved quality control. Measurement techniques typically involve placing the material in an appropriate sample holder and determining the permittivity from measurements made on the sample holder. The sample holder can be a parallel-plate or coaxial capacitor, a resonant cavity, or a transmission line. These structures are used because the relationships between the electromagnetic properties and the measurements are fundamental and well understood. One disadvantage of these types of sample holders is that many materials cannot be easily placed in them; sample preparation is almost always required, which limits their use in real-time monitoring of processes. Another disadvantage is that several of these sample holders are usable only over a narrow frequency range, whereas extracting physical properties from electromagnetic property measurements often requires measurements made over a wide frequency range. Techniques for which the relationship between electromagnetic properties and measurements is not as straightforward have also been employed. One of these is the open-ended coaxial-line probe. This technique has attracted much attention because of its applicability to nondestructive testing over a relatively broad frequency range. It can be used to measure a wide variety of materials, including liquids, solids, and semisolids. These attributes make it a very attractive technique for measuring biological, agricultural, and food materials. In its simplest form, it consists of a coaxial cable without a connector attached to one end; this end is inserted into the material being measured. All of these measurement techniques will be reviewed. They cover the frequency range from DC to 1 THz.
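To make the open-ended probe idea concrete, here is a minimal sketch of permittivity extraction using the simplest lumped-capacitance model of the probe aperture; the probe parameters below are hypothetical and would, in practice, be obtained by calibration against reference liquids:

```python
import numpy as np

# Hedged sketch: open-ended coaxial probe modelled as a lumped capacitance.
# C0 scales with the material permittivity; Cf is the internal fringing
# capacitance. Both values here are illustrative, not measured constants.
Z0 = 50.0        # probe characteristic impedance (ohms)
C0 = 0.02e-12    # aperture capacitance in the material (F)
Cf = 0.01e-12    # frequency-independent fringing capacitance (F)
f = 1e9          # measurement frequency (Hz)
w = 2 * np.pi * f

def gamma_from_eps(eps_r):
    """Reflection coefficient predicted by the capacitive model."""
    Y = 1j * w * (Cf + eps_r * C0)
    return (1 - Z0 * Y) / (1 + Z0 * Y)

def eps_from_gamma(gamma):
    """Invert the model: complex permittivity from a measured reflection."""
    return ((1 - gamma) / (1 + gamma) / (1j * w * Z0) - Cf) / C0

eps_true = 40 - 10j                 # e.g., a lossy aqueous material
meas = gamma_from_eps(eps_true)     # stand-in for a VNA measurement
print(eps_from_gamma(meas))         # recovers (40-10j)
```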
A Massively Multiuser Virtual Environment (MMVE) sets out to create an environment in which thousands, tens of thousands, or even millions of users simultaneously interact with each other as in the real world. For example, Massively Multiuser Online Games (MMOGs), now a very profitable sector of the industry and a subject of academic and industrial research, are a special case of MMVEs in which hundreds of thousands of players simultaneously play games with each other. Although originally designed for gaming, MMOGs are now widely used for socializing, business, commerce, scientific experimentation, and many other practical purposes. One could say that MMOGs are the “killer app” that brings MMVEs into the realm of the eSociety. This is evident from the fact that virtual currencies, such as the Linden dollar (L$) in Second Life, are already being exchanged for real-world money. Similarly, virtual goods and virtual real estate are being bought and sold with real-world money. Massive numbers of users spend their time with fellow players in online games such as EverQuest, Half-Life, World of Warcraft, and Second Life. World of Warcraft, for example, has over twelve million users, with a peak of over 500,000 players online at a given time. There is no doubt that MMOGs and MMVEs have the potential to be the cornerstone of any eSociety platform in the near future, because they bring the massiveness, awareness, and interpersonal interaction of real society into the digital realm.
In this talk, we focus on approaches for supporting the massive number of users in such environments, including scalability methods, zoning techniques, and area-of-interest management. The focus will be on networking and system support and architectures, as well as on research challenges that remain to be solved.
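As a concrete illustration of area-of-interest management, the sketch below implements its simplest grid-based variant: the world is divided into cells at least as large as the interest radius, so a player's neighbours are found by checking only nine cells instead of every player. All names and parameters are illustrative, not any specific MMVE architecture:

```python
import math
from collections import defaultdict

# Grid-based area-of-interest (AOI) management sketch: state updates are
# sent only to players within the AOI radius, found via a spatial grid.
CELL = 50.0          # cell size, chosen >= the AOI radius
AOI_RADIUS = 50.0

grid = defaultdict(set)          # (cell_x, cell_y) -> set of player ids
positions = {}                   # player id -> (x, y)

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

def move(player, x, y):
    """Update a player's position, migrating it between grid cells."""
    if player in positions:
        grid[cell_of(*positions[player])].discard(player)
    positions[player] = (x, y)
    grid[cell_of(x, y)].add(player)

def interested(player):
    """Players within the AOI: only the 3x3 cell neighbourhood is checked."""
    x, y = positions[player]
    cx, cy = cell_of(x, y)
    nearby = set()
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            nearby |= grid[(cx + dx, cy + dy)]
    return {p for p in nearby
            if p != player and math.dist(positions[p], (x, y)) <= AOI_RADIUS}

move("alice", 10, 10); move("bob", 40, 30); move("carol", 400, 400)
print(interested("alice"))   # {'bob'} -- carol is outside the AOI
```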
Estimating the latency between network nodes in the Internet can play a significant role in improving the performance of many applications and services that use latency to make routing decisions. A popular example is peer-to-peer (P2P) networks, which need to build an overlay between peers in a manner that minimizes the delay of exchanging data among peers. Measurement of latency between peers is therefore a critical parameter that directly affects the quality of applications such as video streaming, gaming, file sharing, content distribution, server farms, and massively multiuser virtual environments (MMVEs) or massively multiplayer online games (MMOGs). But acquiring latency information requires a considerable number of measurements to be performed at each node in order for that node to keep a record of its latency to all the other nodes. Moreover, the measured latency values are subject to frequent change, so the measurements need to be regularly repeated to keep up with network dynamics. This has motivated the use of techniques that alleviate the need for a large number of empirical measurements and instead try to predict the entire network latency matrix from a small set of latency measurements. Coordinate-based approaches are the most popular solutions to this problem. The basic idea behind coordinate-based schemes is to model the latency between each pair of nodes as the virtual distance between those nodes in a virtual coordinate system.
In this talk, we will cover the basics of how to measure latency in a distributed manner, without the need for a bottleneck central server. We will start with an introduction and background to the field; then we will briefly explain measurement approaches such as the Network Time Protocol, the Global Positioning System, and the IEEE 1588 standard, before moving to coordinate-based measurement approaches such as GNP (Global Network Positioning), CAN (Content Addressable Network), Lighthouse, Practical Internet Coordinates (PIC), Vivaldi, and PCoord. In the end, we also propose a new decentralized coordinate-based solution with higher accuracy, mathematically proven convergence, and a locality-aware design for lower delay.
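To give a flavor of how coordinate-based schemes work, the following sketch implements the core spring-relaxation update at the heart of Vivaldi, with a constant timestep (the full algorithm adapts the step using per-node error estimates). The "measured" latencies are simulated stand-ins for real pings:

```python
import numpy as np

# Minimal Vivaldi-style coordinate update: each node nudges its coordinate
# along the prediction-error direction, like a relaxing spring network.
rng = np.random.default_rng(0)
n, dim, delta = 20, 2, 0.05

truth = rng.uniform(0, 100, (n, dim))          # hypothetical "real" positions
def rtt(i, j):                                 # stand-in for a ping measurement
    return np.linalg.norm(truth[i] - truth[j])

coords = rng.normal(0, 1, (n, dim))            # coordinates start near origin
for _ in range(2000):
    i, j = rng.choice(n, 2, replace=False)     # a random measurement pair
    vec = coords[i] - coords[j]
    dist = np.linalg.norm(vec) + 1e-9
    err = rtt(i, j) - dist                     # positive: nodes too close
    coords[i] += delta * err * vec / dist      # push apart / pull together

# Evaluate: predicted vs. measured latency for one pair of nodes.
print(np.linalg.norm(coords[3] - coords[7]), rtt(3, 7))
```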
The target audience of this tutorial comprises practitioners, scientists, and engineers who work with networking systems and applications where there is a need to measure and estimate delay among network nodes, possibly a massive number of them (thousands, tens of thousands, or even hundreds of thousands of nodes).
Electrical capacitance tomography (ECT) is an imaging technique for industrial applications. ECT is based on measuring capacitance from a multi-electrode capacitance sensor and reconstructing cross-sectional images, aiming to visualise the distribution of dielectric materials, such as gas/oil flows in an oil pipeline and gas/solids distribution in a fluidised bed. The internal information is valuable for understanding complicated phenomena, verifying computational fluid dynamics (CFD) models, and measuring and controlling industrial processes, all of which are difficult with conventional process instruments. Compared with other tomography modalities, ECT is the most mature and offers the advantages of no radiation, rapid response, non-intrusive and non-invasive sensing, tolerance of high temperature and high pressure, and low cost.
Research into ECT involves sensor and electronic circuit design, data acquisition, computer interfacing, mathematics, finite element analysis, software programming, and general knowledge of process engineering. Because of the extremely small capacitances to be measured (down to 0.0001 pF) and the soft-field nature of the sensing, ECT presents challenges in engineering and mathematics. The University of Manchester (formerly UMIST) pioneered research into ECT. The latest ACECT system represents the state of the art: it can generate on-line images at 100 frames per second with a 73 dB signal-to-noise ratio (SNR) and has been used for many challenging industrial applications, such as gas-oil-water flows in oil pipelines, wet gas separators, pneumatic conveyors, cyclone separators, and fluidised bed dryers. It is foreseen that ECT will make major contributions to the gas/oil, pharmaceutical, power, and other industries. In this lecture, the principle of ECT, capacitance measuring circuits, image reconstruction algorithms, and some applications will be discussed, together with a demonstration of an ACECT system.
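To illustrate the reconstruction step, the sketch below shows linear back-projection (LBP), the simplest and fastest ECT image reconstruction algorithm. The sensitivity matrix here is a random placeholder; a real one comes from finite element modelling of the sensor:

```python
import numpy as np

# Hedged sketch of linear back-projection (LBP) for ECT.
rng = np.random.default_rng(1)
n_meas, n_pix = 66, 812            # e.g., 12 electrodes -> 66 capacitance pairs
S = rng.random((n_meas, n_pix))    # placeholder sensitivity map (meas x pixels)

def normalise(c, c_low, c_high):
    """Normalised capacitances: 0 for low-permittivity fill, 1 for high."""
    return (c - c_low) / (c_high - c_low)

def lbp(lam):
    """Back-project normalised capacitances, rescaling pixel by pixel."""
    g = S.T @ lam / (S.T @ np.ones(n_meas))
    return np.clip(g, 0.0, 1.0)

c_low = rng.random(n_meas)                  # calibration: empty-pipe reading
c_high = c_low + rng.random(n_meas) + 0.5   # calibration: full-pipe reading
c_meas = 0.3 * c_low + 0.7 * c_high         # pretend measurement frame
image = lbp(normalise(c_meas, c_low, c_high))
print(image.mean())                         # ~0.7, as expected for this blend
```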
Abstract – Rogowski coils have long been used for monitoring and measuring high, impulse, and transient currents. They are used for monitoring and control, protective relaying, power distribution switches, electric arc furnaces, electromagnetic launchers, core testing of large rotating electrical machines, partial-discharge measurements in high-voltage cables, power electronics, resistance welding in the automotive industry, and plasma physics. Since their nonmagnetic cores do not saturate, they can operate over wide current ranges with inherent linearity. The applications entail low- and high-accuracy coils, measuring currents from a few amperes to tens of MA, at frequencies from a fraction of a hertz to hundreds of MHz. The increased interest in Rogowski coils over the last decades has led to significant improvements in their design and performance. Their development has included innovative designs, new materials, machining techniques, and printed circuit board structures. This presentation will cover the principles of operation, design, calibration, standards, and applications of Rogowski coils.
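The measuring principle is easy to demonstrate numerically: the coil delivers a voltage proportional to the derivative of the enclosed current, so the current is recovered by integration. The sketch below uses an illustrative mutual inductance and omits the drift and offset compensation that practical integrators require:

```python
import numpy as np

# Rogowski coil principle sketch: v(t) = M di/dt, so i(t) is recovered by
# integrating the coil output. M is a hypothetical, illustrative value.
M = 1e-7                 # mutual inductance of the coil (H)
fs = 1e6                 # sampling rate (Hz)
t = np.arange(0, 0.02, 1 / fs)

i_true = 100 * np.sin(2 * np.pi * 50 * t)        # 50 Hz, 100 A peak current
v_coil = M * np.gradient(i_true, t)              # what the coil delivers

# Digital trapezoidal integrator; practical designs also compensate for
# drift and offset, omitted here for brevity.
i_rec = np.concatenate(([0], np.cumsum((v_coil[1:] + v_coil[:-1]) / 2) / fs)) / M
print(np.max(np.abs(i_rec - i_true)))            # small reconstruction error
```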
Abstract – Metrology lies at the very basis of acquiring scientific knowledge. In today’s interdependent world, ensuring uniform metrology within and across national boundaries is a very important enabling factor of both national and international trade. In electric power systems, measurements of electrical and non-electrical quantities are necessary for their control, protection, and safe and reliable operation. Another very significant application of metrology is in electric energy trade, i.e., in revenue metering for both industrial and residential customers, and also between countries. The impact of distributed power generation, renewable energy resources, and the deregulation of electrical power utilities introduced in many countries will be discussed. An attempt will be made to address the question of what Smart Grids really are, and how they relate to smart metering, synchrophasor measurements, energy storage, and other power system technologies. The role of National Measurement Institutes will be highlighted. New instrumentation and measurement methods for both highest-accuracy and industrial applications of AC electrical power and energy, including high-voltage and high-current calibrations and applications, will be addressed.
Nowadays, scientists, researchers, and practicing engineers face a previously unseen explosion in the richness and complexity of the problems to be solved. Besides spatial and temporal complexity, common tasks usually involve non-negligible uncertainty or even a lack of information; strict requirements concerning the timing, continuity, robustness, and reliability of outputs; and further expectations such as adaptivity and the capability of handling atypical and crisis situations efficiently.
Model-based computing plays an important role in achieving these goals, because it integrates the available knowledge about the problem at hand, in a proper form, into the procedure to be executed, where it acts as an active component during operation. Unfortunately, classical modeling methods often fail to meet the requirements of robustness, flexibility, adaptivity, learning, and generalization. Even soft-computing-based models may fail to be effective enough because of their high (exponentially increasing) complexity. To satisfy the time, resource, and data constraints associated with a given task, hybrid methods and new approaches are needed for the modeling, evaluation, and interpretation of the problems and results. A possible answer to these challenges is offered by the combination of soft computing techniques with the novel approaches of anytime and situational modeling and operation.
Anytime processing is very flexible with respect to the available input information, computational power, and time. It is able to generalize from previous input information and to provide a short response time if the required reaction time is significantly shortened due to failures or an alarm in the modeled system, or if decisions must be made before sufficient information arrives or before the processing can be completed. The aim of the technique is to ensure continuous operation under (dynamically) changing circumstances and to provide optimal overall performance for the whole system. In case of a temporary shortage of computational power and/or a loss of some data, operation continues with the overall performance maintained “at a lower price”: information processing based on algorithms and/or models of lower complexity provides outputs of acceptable quality so that the complete system can keep running. The accuracy of the processing may become temporarily lower, but it is often still sufficient to produce data for qualitative evaluations and to support further decisions.
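The control flow of an anytime algorithm can be captured in a few lines: keep refining, always hold a usable result, and return whatever is available when the time budget expires. The refinable task below (a series approximation of pi) is merely a stand-in for any model of growing complexity:

```python
import time

# Minimal anytime-algorithm skeleton: an iterative-refinement loop that
# always holds a usable (if coarse) result and returns it at the deadline.
def anytime_pi(deadline_s):
    estimate, k, sign = 0.0, 0, 1.0
    t_end = time.monotonic() + deadline_s
    while time.monotonic() < t_end:        # stop when the time budget is spent
        estimate += sign * 4.0 / (2 * k + 1)   # Leibniz series refinement step
        sign, k = -sign, k + 1
    return estimate, k                     # best-so-far result + quality proxy

for budget in (1e-4, 1e-2):               # tighter vs. looser time budget
    value, iterations = anytime_pi(budget)
    print(f"budget {budget:g}s -> pi ~ {value:.6f} after {iterations} terms")
```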
Situational modeling has been designed for the modeling and control of complex systems where traditional cybernetic models have not proved sufficient because the characterization of the system is incomplete or ambiguous due to unique, dynamically changing, and unforeseen situations. Typical cases are alarm situations, structural failures, the starting and stopping of plants, etc. The goal of situational modeling is to resolve the contradiction between the existence of a large number of situations and the limited number of processing strategies, by grouping the possible situations into a treatable (finite) number of model classes of operational situations and by assigning certain processing algorithms to the defined processing regimes. This technique, similarly to anytime processing, offers a tradeoff between resource (including time and data) consumption and output quality.
The presentation gives an overview of the basics of the anytime and situational approaches. Besides summarizing theoretical results and pointing out open questions (e.g., accuracy measures, data interpretation, transients), the author highlights some of the possibilities offered by these new techniques through successful applications taken from the fields of signal and image processing, plant control and fault diagnosis, analysis, and expert systems. Some of the discussed topics are:
- Anytime Fuzzy Fast Fourier Transformation and Adaptive Anytime Fuzzy Fast Fourier Transformation: How can we determine the most important signal parameters before the full signal period has arrived? How can we implement fast algorithms with only negligible delay?
- Anytime Recursive Overcomplete Signal Representations: How can we minimize the channel capacity necessary for transmitting a certain amount of information? How can we provide optimal and flexible on-going signal representations, on-going signal segmentation into stationary intervals, and on-going feature extraction for immediate use in data transmission, communication, diagnostics, or other applications, when the transmission channel is overloaded or when processing non-stationary signals for which complete signal representations can be used only with serious limitations?
- High Dynamic Range (HDR) imaging and situational image quality improvement: How can we make the invisible details of images visible? How can we enhance the information in images that is significant for further processing?
- Anytime control and fault diagnosis of plants: How can we produce useful results and react in crisis situations very quickly in order to avoid catastrophes? How can we increase the safely available reaction time of the (slow) human supervisor by significantly decreasing the time needed for the automatic detection and diagnosis of faults?
- CASY, an Intelligent Car Crash Analysis System: How can we build an intelligent expert system capable of reconstructing the 3D model of crashed cars autonomously (without any human interaction) using only 2D photos, and, based on that model, how can it determine characteristic features of crashes such as the energy absorbed by the car-body deformation, the direction of impact, and the pre-crash speed of the car? In what other fields can the algorithms of the system be used?
In this talk, modeling in imaging measurements is proposed as a way to facilitate the interpretation of phenomena based on imagery, or to make inferences based on models of such phenomena. To illustrate the presentation, several applications of imaging measurements and modeling are discussed, focusing on areas such as medicine, biometrics, pulp and paper, soil sciences, porous media, surveillance, and human-machine interfaces.
When modeling imaging measurements, we are usually trying to describe the world (or a real-world phenomenon) using one or more images, and to reconstruct some of its properties (such as shape, texture, or color) from imagery data. This is actually an ill-posed problem that humans learn to solve effortlessly, while computer algorithms are often prone to errors. Nevertheless, in some cases computers can surpass humans and interpret imagery more accurately, given the proper choice of models, as we will show in this talk.
Reconstructing interesting properties of real-world objects or phenomena from captured imagery data involves solving an inverse problem, in which we seek to recover some unknowns given insufficient information to specify a unique solution. Therefore, we disambiguate between possible solutions by relying on models based on physics, mathematics, or statistics. Modeling the real world in all its complexity is still an open problem. However, if we know ahead of time about the phenomenon or object of interest, we can construct detailed models using specialized techniques and domain-specific representations that describe the measurements efficiently and reliably. In this talk, we give a brief overview of challenging domain-specific modeling problems and use them to illustrate the concepts involved in modeling imaging measurements (in the form of a tutorial). We discuss modeling issues in 2D and 3D stochastic textures (e.g., pulp and paper, soil science, and agriculture). We also provide some insights on model selection and model-based prediction, using examples in medicine (e.g., modeling tumor shape and size and making inferences about future growth or shrinkage) and biometrics (e.g., measuring the pose of a human head).
Modeling imaging measurements is challenging, especially when dealing with textures. Texture is a widespread phenomenon in several segments of industry and science (e.g., pulp and paper, soil science, oil reservoir evaluation, agriculture, etc.). Nevertheless, texture is easy to perceive visually but difficult to describe. Typically, texture depends on the scale at which it is perceived, and its pattern depends on what we are looking for in it. Texture may consist of organized patterns of regular sub-elements, but in some cases it may be stochastic (e.g., paper, non-tissue materials, soil, etc.). For example, an image of a paper sample may be seen as a stochastic texture without an identifiable texture element, or it may be seen as a collection of fibers forming a fiber network, which in this case has fibers as identifiable texture elements. An important question is how we should describe and interpret a given texture. This is an open question, and prior knowledge of the application and its textural properties helps in identifying effective models for texture interpretation (and classification).

In particular, the structural characterization and classification of the stochastic textures of porous materials has attracted the attention of researchers in different application areas because of its great economic importance. For example, problems related to mass transfer and the retention of solids in multi-phase fluid flow through stochastic porous materials are ubiquitous in chemical engineering. Agricultural engineering is one of the sectors that has received attention recently, mostly because of changing agricultural practices in both developing and developed countries, with great environmental impact. Mass transfer in porous media such as soils depends strongly on the morphological aspects of the media, such as the shape and size of voids, and also on their topological attributes, such as network connectivity. More recently, researchers have proposed geometrical and statistical approaches for porous media characterization. The statistical characterization and classification of stochastic porous media is essential for simulating and/or predicting the mass transfer properties of a particular stochastic medium. Much work has been done on the characterization of stochastic porous media, but discriminating between different media types from measurements remains a challenging issue.

In this talk, we discuss the application of gamma statistics to model the distribution of voids in stochastic porous media, which admits a direct statistical-geometric representation of stochastic fibrous networks and their fluid transfer properties. A related issue is the classification of stochastic textures and porous media, which we discuss by introducing a gamma manifold and an embedding of stochastic texture and porous media representations. To measure the similarity of such stochastic textures and porous media, different approaches to measuring stochastic texture similarity are reviewed. Experimental results based on porous media data obtained from tomographic images of soil, and on images of fibrous materials, are presented as illustrations.
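As a small illustration of the gamma-statistics approach, the sketch below fits a gamma distribution to synthetic void-size data and treats the fitted parameter pair as a point for comparing media. Real data would come from segmented tomographic images, and a plain Euclidean parameter distance stands in here for the information-geometric distances discussed in the talk:

```python
import numpy as np
from scipy import stats

# Hedged sketch: gamma-statistics modelling of void sizes in a stochastic
# porous medium, with synthetic stand-in data.
rng = np.random.default_rng(2)
void_sizes = rng.gamma(shape=2.0, scale=15.0, size=5000)   # "measured" sizes

# Fit a gamma distribution (location pinned at 0, as sizes are positive).
shape, loc, scale = stats.gamma.fit(void_sizes, floc=0)
print(f"fitted shape k={shape:.2f}, scale theta={scale:.2f}")

# The fitted (k, theta) pair places the medium as a point on a gamma
# manifold; media can then be compared through a distance between their
# fitted parameters (plain Euclidean shown for simplicity).
other = np.array([2.4, 12.0])              # parameters of a second medium
print(np.linalg.norm(np.array([shape, scale]) - other))
```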
Modeling imaging measurements often involves errors, and estimating the expected error of a model can be important in some applications (e.g., when estimating a tumor’s size and its potential growth, or shrinkage, in response to treatment). This issue is closely related to machine learning and pattern recognition, and techniques from these areas can be adapted to solve problems in imaging measurements. Typically, a model has tuning parameters, and these tuning parameters may change the model’s complexity. We wish to minimize both the modeling error and the model complexity; in other words, to capture the ‘big picture’ we often sacrifice some of the small details. For example, estimating tumor growth (or shrinkage) in response to treatment requires modeling the tumor’s shape and size, which can be challenging for real tumors, and simplified models may be justifiable if the predictions obtained are informative (e.g., for evaluating treatment effectiveness). To conclude the talk, open problems in model selection and assessment for imaging measurements are discussed in some detail, particularly in biometrics and medicine.
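The standard machine-learning recipe for balancing modeling error against model complexity is selection by held-out validation error. The sketch below illustrates it with polynomial degree standing in for a generic tuning parameter (e.g., of a tumor-shape model):

```python
import numpy as np

# Hedged sketch: choose model complexity by held-out validation error.
rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 120)
y = 1 - 2 * x + 0.5 * x**2 + rng.normal(0, 0.1, x.size)   # noisy quadratic

train, val = np.arange(80), np.arange(80, 120)            # simple split
for degree in (1, 2, 5, 12):
    coeffs = np.polyfit(x[train], y[train], degree)       # fit on train set
    err = np.mean((np.polyval(coeffs, x[val]) - y[val])**2)
    print(f"degree {degree:2d}: validation MSE {err:.4f}")
# Too simple underfits, too complex overfits; pick the degree with the
# lowest validation error (here, typically 2).
```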
Robots have changed the way we work, play, live and unfortunately fight wars. Robots invaded the workplace many decades ago, initially for factory automation. They are increasing their presence in the home at a very rapid pace, primarily for assisted living. Wars are being fought using robots on the ground, above and below the waters and in the air. In the next decade, the world will witness the largest growth of robots in the service industry. From the days of industrial automation using monstrous robots, the world has advanced to micro and nano robots traversing the veins of a human body to deliver drugs.
What makes robots as capable and versatile as they are today? Will they ever attain the full functionality, intelligence, and versatility of human beings, or is that wishful thinking? What will be the breakthrough technology that enables robots to make that quantum jump in their capabilities?
For the successful completion of tasks, robots have to perceive the world around them: the workspace in which they operate. At the heart of this perception are the inputs from a gamut of sensors. Accurate measurement of physical parameters and the fusion of sensory data have a profound influence on the accuracy of the perception model. While much energy and many resources are still being expended on research into robot locomotion and actuators for motion, it is the advancement in sensors and measurement technology that will catapult robots to the next level of versatility and acceptance. Miniaturisation of sensors and precision measurement will be the flavour of research in the next decade, making a career in instrumentation and measurement a very attractive proposition for young scientists and researchers.
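As a taste of how sensor fusion improves perception, the sketch below implements a complementary filter, one of the simplest fusion schemes: a drifting gyroscope is combined with a noisy but drift-free accelerometer tilt estimate. All signals are simulated stand-ins:

```python
import numpy as np

# Complementary-filter sketch: fuse a gyroscope (accurate short-term, drifts
# long-term) with an accelerometer-derived angle (noisy but drift-free).
rng = np.random.default_rng(4)
dt, n = 0.01, 1000
true_angle = 0.5 * np.sin(0.5 * np.arange(n) * dt)          # rad

gyro = np.gradient(true_angle, dt) + 0.05 + rng.normal(0, 0.02, n)  # biased rate
accel_angle = true_angle + rng.normal(0, 0.05, n)                   # noisy angle

alpha, est = 0.98, np.zeros(n)
for k in range(1, n):
    # Trust the integrated gyro for fast changes, the accelerometer for the
    # long-term reference; alpha sets the crossover between the two.
    est[k] = alpha * (est[k - 1] + gyro[k] * dt) + (1 - alpha) * accel_angle[k]

print(f"gyro-only drift: {abs(np.cumsum(gyro)[-1]*dt - true_angle[-1]):.3f} rad")
print(f"fused error:     {abs(est[-1] - true_angle[-1]):.3f} rad")
```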
This presentation will:
• highlight the importance of sensing and measurement in the world of robotics
• give an overview of the various sensors and sensing technologies that are in vogue in robotics
• discuss future directions of research and development of sensors for robotics – MEMS, biological sensors, etc.
• illustrate case studies of advanced sensing and instrumentation in autonomous robots, such as a swarm of super-intelligent nano quadrocopters, a robot that inspects plant health and growth in a laboratory, and a manually operated robot for moving hospital beds.
This presentation will be informative for industry professionals and academics, and should enthuse engineers and students to take up a career in sensors and instrumentation.
Despite the growing deployment of other energy sources, coal and biomass use is increasing worldwide to meet the growing global demand for electricity, which is predicted to rise by 2.6% per annum over the next 20 years. Global fluctuations in coal price and logistic uncertainties in coal supply mean that many power stations are burning a diverse range of coals (indigenous and imported), and the type and quality of coal being fired at any moment is often unknown for various practical reasons. Although biomass can be used to generate energy in different ways, co-firing with coal at existing power stations remains a practical option available to power plant operators and is widely adopted as one of the main technologies for reducing greenhouse gas emissions from power generation. Biomass originates from a diverse range of sources in a wide variety of forms. In general, biomass has a higher moisture content and higher volatile matter than coal, but its density and calorific value are lower. The inherent differences in combustion properties between biomass and coal, the unknown changes in the type and quality of coals, and the fluctuations in electricity demand have posed significant challenges to the power generation industry. Measurement and monitoring techniques have an important part to play in tackling these challenges.
This presentation reviews recent advances in the development and application of measurement and monitoring techniques to optimize the operation of coal- and biomass-fired power plants. The techniques covered in this presentation include pulverized fuel flow metering, on-line particle sizing, flame imaging, flame stability monitoring, and on-line fuel tracking. Fundamental principles of the measurement and monitoring techniques, along with the design and implementation of prototype sensors and instruments, will be introduced. Results from recent practical evaluations on industrial-scale combustion test facilities and demonstration trials on full-scale power plants will be reported.
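As a hint of what flame stability monitoring can involve, the sketch below derives a simple stability index and a dominant flicker frequency from a simulated mean-luminance signal of successive flame images; actual plant instruments use considerably more sophisticated features:

```python
import numpy as np

# Hedged sketch: treat the mean brightness of successive flame images as a
# time series; its relative fluctuation and flicker spectrum hint at flame
# stability. Both signals below are simulated stand-ins.
rng = np.random.default_rng(5)
fs, n = 200.0, 2048                       # frame rate (Hz), number of frames
t = np.arange(n) / fs

def stability_index(brightness):
    """Relative RMS fluctuation of the luminance signal (lower = steadier)."""
    return np.std(brightness) / np.mean(brightness)

stable = 100 + rng.normal(0, 1.0, n)                         # quiet flame
unstable = 100 + 15 * np.sin(2 * np.pi * 12 * t) + rng.normal(0, 1.0, n)

for name, sig in (("stable", stable), ("unstable", unstable)):
    spectrum = np.abs(np.fft.rfft(sig - sig.mean()))
    peak_hz = np.fft.rfftfreq(n, 1 / fs)[spectrum.argmax()]
    print(f"{name}: index={stability_index(sig):.3f}, dominant {peak_hz:.1f} Hz")
```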