Distinguished Lecturer Program

The I&M Society Distinguished Lecturer Program (DLP) is one of the most exciting programs offered to our chapters, I&M members, and IEEE members. It provides I&M chapters around the world with talks by experts on topics of interest and importance to the I&M community. Together with our conferences and publications, it is one of the principal means by which we disseminate knowledge in the I&M field. Our lecturers are among the most qualified experts in their fields, and we offer our members a first-hand chance to interact with these experts during their lectures. The I&M Society aids chapters financially so that they can use this program.

All Distinguished Lecturers are outstanding in their fields of specialty. Collectively, the Distinguished Lecturers possess a broad range of expertise within the area of I&M. Chapters are therefore encouraged to use this program as a means to make their local I&M community aware of the most recent scientific and technological trends and to enhance their member benefits. Although lectures are mainly organized to benefit existing members and Chapters, they can also be effective in generating membership and encouraging new Chapter formation. Interested parties are encouraged to contact the I&M DLP Chair regarding this type of activity.

DLP Chair
Kristen Donnell
Missouri University of Science & Technology
United States

2021 Call for DL Applications
The I&M Society is currently accepting applications for new Distinguished Lecturers for the Distinguished Lecturer Program. The deadline to apply is April 30, 2021. More Details Here

Looking for a DL topic not covered by our current DLs? Suggest a topic, or find a DL who may be able to adapt his or her topic for your event, by reaching out to the DLP Chair. Contact the DLP Chair

Virtual Distinguished Lecturer Webinar Series
COVID-19 has required all of us to adapt personally and professionally, and the I&M Society is no exception.
To this end, and in order to remain connected with our I&M colleagues and friends, the I&M Society hosted two Virtual Distinguished Lecturer Webinar series. Recordings are available at the link below! View Virtual DL Webinars

Current Distinguished Lecturers

Eros Pasero
Distinguished Lecturer 2021 - 2024
Talk(s): Medicine 4.0: AI and IoT, the New Revolution

Industry 4.0 is considered the great revolution of the past few years. New technologies, the Internet of Things, and the possibility of monitoring everything from everywhere have changed both production plants and approaches to industrial production. Medicine, by contrast, is considered a slowly changing discipline. The human body is a difficult system to model. But we can identify some stages at which medicine can be compared to industry. Four major changes revolutionized medicine:

Medicine 1.0: James Watson and Francis Crick described the structure of DNA. This was the beginning of research in the field of molecular and cellular biology.
Medicine 2.0: Sequencing of the human genome. This discovery made it possible to find the origin of diseases.
Medicine 3.0: The convergence of biology and engineering. The biologist's experience can now be combined with the technology of engineers, and new approaches to new forms of analysis can be used.
Medicine 4.0: The digitalization of medicine: IoT devices and techniques, AI to perform analyses, machine learning for diagnoses, brain-computer interfaces, and smart wearable sensors.

Medicine 4.0 is definitely a great revolution in patient care. New horizons are possible today. COVID-19 has highlighted problems that have existed for a long time. Relocation of services means remote monitoring and remote diagnoses without direct contact between the doctor and the patient. Hospitals are freed from routine tests that can be performed by patients at home and reviewed by doctors over the internet. Potentially dangerous conditions can be prevented.
During the COVID emergency, everybody can check their own condition and ask for a medical visit (or a swab) only when really necessary. This is true telemedicine. It is not a WhatsApp chat in which an elderly person tries to talk with a doctor; it is a smart device able to measure objective vital parameters and send them to a health care center. Of course, Medicine 4.0 requires new technologies for smart sensors. These devices need to be very easy to use, fast, reliable, and low cost. They must be accepted by both patients and doctors. In this talk we will explore together the meaning of telemedicine and e-health. E-health is the key to allowing people to self-monitor their vital signs. Some devices already exist, but a new approach will allow everybody (especially older people with cognitive difficulties) to use these systems in a friendly way. Telemedicine will be the new approach to the concept of the hospital: a virtual hospital, without any physical contact but with an objective measurement of every parameter. A final remote discussion between the doctor and the patient is still required for both to feel comfortable, but the doctor will have all the vital signals recorded, allowing a diagnosis based on reliable data. Another important aspect of Medicine 4.0 is the possibility of using AI both to perform parameter measurements and to manage the monitoring of multiple patients. New image processing based on artificial neural networks gives doctors better and faster analysis, and AI algorithms are also able to manage intensive care rooms with several patients, reducing the number of doctors involved in the overall monitoring of the situation.
Daniel Watzenig
Distinguished Lecturer 2021 - 2024
Talk(s): Introduction to Autonomous Vehicles

• A basic introduction to the sense-plan-act challenges of autonomous vehicles
• An introduction to the most common state-of-the-art sensors used in autonomous driving (radar, camera, lidar, GPS, odometry, vehicle-2-x), their benefits and disadvantages, and mathematical models of these sensors

Autonomous driving is seen as one of the pivotal technologies that will considerably shape our society, influence future transportation modes and quality of life, and alter the face of mobility as we experience it today. Many benefits are expected, ranging from reduced accidents, optimized traffic, improved comfort, social inclusion, and lower emissions to better road utilization through the efficient integration of private and public transport. Autonomous driving is a highly complex sensing and control problem. State-of-the-art vehicles include many different compositions of sensors, including radar, cameras, and lidar. Each sensor provides specific information about the environment at varying levels and has an inherent uncertainty and accuracy measure. Sensors are the key to perceiving the outside world in an autonomous driving system, and their cooperative performance directly determines the safety of such vehicles. The ability of one isolated sensor to provide accurate, reliable data about its environment is extremely limited, as the environment is usually not very well defined. Beyond the sensors needed for perception, the control system needs some basic measure of its position in space and its surrounding reality.
Real-time capable sensor processing techniques used to integrate this information have to manage the propagation of inaccuracies, fuse information to reduce uncertainties and, ultimately, offer levels of confidence in the produced representations that can then be used for safe navigation decisions and actions.

Multi-Sensor Perception and Data Fusion

• An overview of different sensor data fusion taxonomies, as well as different ways to model the environment (dynamic object tracking vs. occupancy grids) in the Bayesian framework, including uncertainty quantification
• An exploration of potential problems in sensor data fusion, e.g., data association, outlier treatment, anomalies, bias, correlation, and out-of-sequence measurements
• Propagation of uncertainties from object recognition to decision making, based on selected examples, e.g., real-time vehicle pose estimation from uncertain measurements of different sources (GPS, odometry, lidar), including a discussion of fault detection and localization (sensor drift, breakdown, outliers, etc.)

Sensor fusion overcomes the drawbacks of current sensor technology by combining information from many independent sources of limited accuracy and reliability, making the system less vulnerable to random and systematic failures of a single component. Multi-source information fusion avoids the perceptual limitations and uncertainties of a single sensor and forms a more comprehensive perception and recognition of the environment, including static and dynamic objects. Through sensor fusion we combine readings from different sensors, remove inconsistencies, and merge the information into one coherent structure. This kind of processing is a fundamental feature of all animal and human navigation, where multiple information sources such as vision, hearing, and balance are combined to determine position and plan a path to a destination.
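The core benefit described here, that fusing independent sources of limited accuracy reduces overall uncertainty, can be illustrated with a minimal sketch assuming Gaussian sensor errors. The sensor names and numbers below are illustrative, not from the talk: combining independent estimates by inverse-variance weighting always yields a variance no larger than that of the best single sensor.

```python
# Minimal illustration of Bayesian fusion of independent sensor estimates:
# each sensor reports a value and a variance; inverse-variance weighting
# yields a fused estimate whose variance is smaller than any single sensor's.

def fuse(estimates, variances):
    """Fuse independent Gaussian estimates by inverse-variance weighting."""
    weights = [1.0 / v for v in variances]
    fused = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
    fused_var = 1.0 / sum(weights)  # always <= min(variances)
    return fused, fused_var

# Hypothetical example: radar and lidar both measure range to the same object.
radar = (25.4, 0.50)   # meters, variance
lidar = (25.1, 0.10)   # meters, variance (more certain here)

fused, fused_var = fuse([radar[0], lidar[0]], [radar[1], lidar[1]])
print(fused, fused_var)  # fused value lies closer to the more certain sensor
```

The same weighting rule is what a Kalman filter applies recursively in the measurement-update step; here it is shown for a single static quantity only.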
In addition, several readings from the same sensor are combined, making the system less sensitive to noise and anomalous observations. In general, multi-sensor data fusion can achieve increased object classification accuracy, improved state estimation accuracy, improved robustness (for instance in adverse weather conditions), increased availability, and an enlarged field of view. Emerging applications such as autonomous driving systems, which are in direct contact with and interact with the real world, require reliable and accurate information about their environment in real time.

Yong Yan
Distinguished Lecturer 2021 - 2024
Talk(s): Measurement and monitoring techniques through electrostatic sensing

Over the past three decades a wide range of electrostatic sensors have been developed and utilized for the continuous monitoring and measurement of various industrial processes. Electrostatic sensors offer simplicity in structure, cost-effectiveness, and suitability for a variety of process conditions. They either provide unique solutions to some measurement challenges or offer more cost-effective or complementary options to established sensors such as those based on acoustic, capacitive, electromagnetic, or optical principles. The established and potential applications of electrostatic sensors are wide ranging, but the underlying sensing principle and system characteristics are very similar. This presentation will review recent advances in electrostatic sensors and associated signal processing algorithms for industrial measurement and monitoring applications. The fundamental sensing principle and characteristics of electrostatic sensors will be introduced. A number of practical applications of electrostatic sensors will be presented.
These include pulverized fuel flow metering, linear and rotational speed measurement, condition monitoring of mechanical systems, and advanced flame monitoring. Results from recent experimental and modelling studies, as well as industrial trials of electrostatic sensors, will be reported.

Andrew Taberner
Distinguished Lecturer 2019 - 2022
Talk(s): A Dynamometer for the Heart

The heart is a complex organic engine that converts chemical energy into work. Each heartbeat begins with an electrically released pulse of calcium, which triggers force development and cell shortening, at the cost of energy and oxygen and the dissipation of heat. My group has developed new instrumentation systems to measure all of these processes simultaneously while subjecting isolated samples of heart tissue to realistic contraction patterns that mimic the pressure-volume-time loops experienced by the heart with each beat. These devices are effectively 'dynamometers' for the heart, allowing us to measure the performance of the heart and its tissues much as you might test the performance of your motor vehicle on a 'dyno.' This demanding undertaking has required us to develop our own actuators, force transducers, heat sensors, and optical measurement systems. Our instruments make use of several different measurement modalities, which are integrated in a robotic, hardware-based real-time acquisition and control environment and interpreted with the aid of a computational model. In this way, we can now resolve (to within a few nanowatts) the heat released by living cardiac muscle fibers as they perform work at 37 °C. Muscle force and length are controlled and measured to micronewton and nanometer precision by a laser interferometer, while the muscle is scanned in the view of an optical microscope equipped with a fluorescent calcium imaging system.
Concurrently, the changing muscle geometry is monitored in 4D by a custom-built optical coherence tomograph, and the spacing of muscle proteins is imaged in real time by transmission-microscopy and laser-diffraction systems. Oxygen consumption is measured using fluorescence-quenching techniques. Equipped with these unique capabilities, we have probed the mechano-energetics of failing hearts from rats with diabetes. We found that the peak stress and peak mechanical efficiency of tissues from these hearts were normal, despite prolonged twitch duration. We have thus shown that the compromised mechanical performance of the diabetic heart arises from a reduced period of diastolic filling and does not reflect either diminished mechanical performance or diminished efficiency of its tissues. In another program of research, we have demonstrated that, despite claims to the contrary, dietary supplementation with fish oils has no effect on heart muscle efficiency. Neither of these insights was fully revealed until the development of this instrument.

Optical Sensing in Bioinstrumentation

Optical sensors and techniques are used widely in many areas of instrumentation and measurement. Optical sensors are often, conveniently, non-contact, and thus impose negligible disturbance on the parameter undergoing measurement. Valuable information can be represented and recorded in space, time, and optical wavelength. Optical sensors can provide exceptionally high spatial and/or temporal resolution, bandwidth, and range. Moreover, they can be inexpensive and relatively simple to use. At the Bioinstrumentation Lab at the Auckland Bioengineering Institute, we are particularly interested in developing techniques for measuring parameters from inside and outside the body. Such measurements help us to quantify physiological performance, detect and treat disease, and develop novel medical and scientific instruments.
In making such measurements we often draw upon, and develop, our own optical sensing and measurement methods – from interferometry, fluorimetry, and diffuse light imaging to area-based and volume-based optical imaging and processing techniques. In this talk, I will overview some of the interesting new optically based methods that we have recently developed for use in bioengineering applications. These include:
1) diffuse optical imaging methods for monitoring the depth of a drug as it is rapidly injected through the skin, without requiring a needle;
2) stretchy soft optical sensors for measuring strains of up to several hundred percent during movement;
3) multi-camera image registration techniques for measuring the 3D shape and strain of soft tissues;
4) optical coherence tomography techniques for detecting the 3D shape of deforming muscle tissues; and
5) polarization-sensitive imaging techniques for classifying the optical and mechanical properties of biological membranes.
While these sensors and techniques have been motivated by applications in bioengineering, the underlying principles have broad applicability to other areas of instrumentation and measurement.

Degang Chen
Distinguished Lecturer 2018 - 2021
Talk(s): Accurate Linearity Testing for High Performance Data Converters using Significantly Reduced Measurement Time and Relaxed Instrumentation

Semiconductor chip manufacturing cost consists of die cost, package cost, and test cost. The trends of increasing design complexity, increasing quality needs, and new process nodes and defect models are pushing test cost to the forefront. This is especially true for high-resolution data converters, whose accurate testing requires expensive instruments and is extremely time-consuming. As a result, linearity testing of data converters often dominates the overall test cost of SoCs.
This talk will present several recently developed techniques for reducing linearity test cost by dramatically reducing measurement time and dramatically relaxing instrumentation requirements. The IEEE standard for ADC linearity testing requires the stimulus signal to be at least 10 times more accurate than the ADC under test. To relax this stringent requirement, the SEIR (stimulus error identification and removal) algorithm was developed to accurately test high-resolution ADCs using nonlinear stimuli. Industry has demonstrated that more than 16 bits of ADC test accuracy can be achieved using 7-bit linear ramps instead of the 20-bit linear ramps required by the IEEE standard, a relaxation of well over 1000 times in the instrumentation accuracy requirement. The biggest contributor to test cost is long measurement time. The recently developed uSMILE (ultrafast Segmented Model Identification for Linearity Errors) algorithm dramatically reduces the measurement time needed for ADC linearity testing. With a system identification approach using a segmented model for the integral nonlinearity, the algorithm can reduce test time by a factor of over 100 and still achieve test accuracies superior to the standard histogram test method. This method has been extensively validated by industry and has been adopted for production test for multiple product families. By combining the salient features of both SEIR and uSMILE, the ultrafast stimulus error removal and segmented model identification of linearity errors (USER-SMILE) algorithm was developed. The USER-SMILE algorithm uses two nonlinear signals as input to the ADC under test, one shifted by a constant voltage with respect to the other. By subtracting the two sets of output codes, the input signal is canceled and the nonlinearity of the ADC, modeled by a segmented non-parametric INL model, is identified with the least-squares method.
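The input-cancellation step at the heart of this approach can be sketched numerically. The following is an illustrative toy, not the published algorithm: for an ideal quantizer, the code-by-code difference between a nonlinear ramp and its constant-shifted copy collapses to at most two adjacent values, regardless of the ramp's shape, which is why the stimulus need not be highly linear. All parameter values are assumptions for the demonstration.

```python
# Toy demonstration of the input-cancellation idea: apply the same
# *nonlinear* (imprecisely known) ramp twice, the second copy offset by a
# constant voltage alpha, and subtract the output codes. For an ideal ADC
# the stimulus shape cancels exactly, leaving only the quantized offset.

N_BITS = 12
FULL_SCALE = 1.0
LSB = FULL_SCALE / 2**N_BITS

def ideal_adc(v):
    """Ideal uniform quantizer (input clipping omitted for simplicity)."""
    return int(v / LSB)

# A deliberately nonlinear "cheap" ramp: linear part plus a cubic bow.
samples = [i / 5000 for i in range(5000)]          # t in [0, 1)
ramp = [0.8 * t + 0.1 * t**3 for t in samples]      # nonlinear stimulus

alpha = 0.01                                        # constant voltage shift
codes1 = [ideal_adc(v) for v in ramp]
codes2 = [ideal_adc(v + alpha) for v in ramp]

diff = [c2 - c1 for c1, c2 in zip(codes1, codes2)]
# The stimulus nonlinearity has canceled: the difference takes at most two
# adjacent integer values, floor(alpha/LSB) and floor(alpha/LSB) + 1.
print(sorted(set(diff)))
```

In the real algorithm the ADC itself is nonlinear, so the code differences deviate from this two-value pattern; it is those deviations that are fitted with the segmented INL model by least squares.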
A completely on-chip ADC BIST circuit based on the USER-SMILE algorithm has been developed and demonstrated on a 28 nm CMOS automotive microcontroller. The ADC test subsystem includes a nonlinear DAC as a signal generator, a built-in voltage-shift generator, a BIST computation engine, and dedicated memory cells. Silicon measurements show accurate test results. The INL test results are further used to correct ADC linearity errors, thus providing a method for reliably calibrating the ADC. Measurement results demonstrate that the BIST-based calibration method achieves >10 dB THD/SFDR improvement over the existing calibration method used by industry.

Jacob Scharcanski
Distinguished Lecturer 2018 - 2021
Talk(s): Computer Vision in Medical Imaging Measurements: Making Sense of Visual Data

In this talk, we discuss how computer vision can facilitate the interpretation of medical imaging data, or help to make inferences based on models of such data. To illustrate the presentation, several applications of medical imaging measurement and modeling are discussed, focusing on areas such as the correction of imaging artifacts that may occlude visual information, and tumor detection, modeling, and measurement in different imaging modalities. When interpreting medical imaging data with computer vision, we are usually trying to describe anatomic structures (or medical phenomena) using one or more images, and to reconstruct some of their properties (such as shape, texture, or color) from the imaging data. This is an ill-posed problem that humans can learn to solve effortlessly, but computer algorithms are often prone to errors. Nevertheless, in some cases computers can surpass humans and interpret medical images more accurately, given the proper choice of models, as we will show in this talk.
Reconstructing interesting properties of real-world objects or phenomena from captured imaging data involves solving an inverse problem, in which we seek to recover some unknowns given insufficient information to specify a unique solution. Therefore, we disambiguate between possible solutions by relying on models based on physics, mathematics, or statistics. Modeling the real world in all its complexity is still an open problem. However, if we know the phenomenon or object of interest, we can construct detailed models using specialized techniques and domain-specific representations that can reliably describe the measurements (or, in some cases, obtain them). In this talk, we briefly overview some challenging problems in computer vision for medical imaging and measurement, with illustrations and insights about model selection and model-based prediction. Some of the applications discussed in this talk are: modeling tumor shape and size, and making inferences about future growth or shrinkage; modeling relevant details in the background of medical images to discriminate them from useless background noise; and modeling shading artifacts to minimize their influence when detecting and measuring skin lesions in standard camera images. Medical images contain a wealth of information, which makes modeling them a challenging task. Therefore, medical images are often segmented into multiple elementary parts, simplifying their representation and changing the image model into something that is more meaningful, or easier to analyze and measure (e.g., by describing object boundaries by lines or curves, or image segments by their textures, colors, etc.). Nevertheless, these simpler image elements may be easy to perceive visually but difficult to describe.
For example, the texture of a skin lesion may not have an identifiable texture element or a model known a priori, and yet skin lesion detection must still be accurate and precise. Segmentation and analysis of medical imaging data is still an open question, and some current directions are discussed in this talk. Computer vision and modeling are interrelated. Modeling imaging measurements often involves errors, and estimating the expected error of a model can be important in applications (e.g., estimating a tumor's size and its potential growth, or shrinkage, in response to treatment). This issue can be approached by adapting machine learning and pattern recognition techniques to solve problems in medical imaging measurement. Typically, a model has tuning parameters, and these tuning parameters may change the model complexity. We wish to minimize both modeling errors and model complexity; in other words, to get the 'big picture' we often sacrifice some of the small details. For example, estimating tumor growth (or shrinkage) in response to treatment requires modeling the tumor shape and size, which can be challenging for real tumors, and simplified models may be justifiable if the predictions obtained are informative (e.g., for evaluating treatment effectiveness). To conclude, we outline current trends in computer vision for medical imaging measurements and discuss some open problems.

Reza Zoughi
Distinguished Lecturer 2018 - 2021
Talk(s): Evolution of Microwave and Millimeter Wave Imaging for NDE Applications

Microwave and millimeter-wave signals span the frequency range of ~300 MHz to 300 GHz, corresponding to a wavelength range of 1000 mm to 1 mm. Signals at these frequencies can easily penetrate dielectric materials and composites and interact with their inner structures.
The relatively small wavelengths and wide bandwidths associated with these signals enable the production of high-spatial-resolution images of materials and structures. Incorporating imaging techniques such as lens-focused and near-field techniques, synthetic aperture focusing, and holographic methods based on robust back-propagation algorithms into more advanced and unique millimeter-wave imaging systems has brought about a flurry of activity in this area, in particular for nondestructive evaluation (NDE) applications. These imaging systems and techniques have been successfully applied to a wide range of critical NDE-related applications. Although near-field techniques have also been used prominently for these applications in the past, undesired issues related to changing standoff distance and the slowness of the image production process have motivated several innovative, automatic standoff-distance variation removal techniques. Ultimately, imaging techniques must produce high-resolution 3D images, become real-time, and be implemented in portable systems. To this end, and to expedite the imaging process while providing high-resolution images, a 6" by 6" one-shot, rapid, and portable imaging system (microwave camera) consisting of 576 resonant slot elements was designed and demonstrated a few years ago. Subsequently, efforts were expended to design and implement several different variations of this imaging system to accommodate one-sided and monostatic imaging, enable 3D image production using non-uniform rapid scanning of an object, and increase the operating frequency into the higher millimeter-wave range. These efforts recently culminated in a real-time, portable, high-resolution 3D-imaging microwave camera operating in the 20-30 GHz frequency range.
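As a quick numerical check of the figures quoted in this abstract (a back-of-envelope sketch, not part of the talk), the wavelengths follow directly from λ = c/f; the 20-30 GHz camera band corresponds to free-space wavelengths of roughly 15 mm down to 10 mm.

```python
# Free-space wavelength corresponding to a frequency: lambda = c / f.
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(freq_hz):
    return C / freq_hz * 1000.0  # meters -> millimeters

# Band edges quoted in the abstract, plus the camera's operating band.
for f in (300e6, 20e9, 30e9, 300e9):
    print(f"{f / 1e9:7.1f} GHz -> {wavelength_mm(f):8.2f} mm")
```

The printed values reproduce the ~1000 mm (at 300 MHz) to ~1 mm (at 300 GHz) wavelength range stated above.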
This presentation provides an overview of these techniques, along with illustrations of several typical examples in which they have provided viable solutions to critical NDE problems.

Mihaela Albu
Distinguished Lecturer 2016 - 2022
Talk(s): High Reporting Rate Measurements for Smart[er] Grids

Modern control algorithms in emerging power systems process information delivered mainly by distributed, synchronized measurement systems and available in data streams with different reporting rates. Multiple measurement approaches are used: on one hand, time-aggregated measurements are offered by currently deployed IEDs (within the SCADA framework), including smart meters and other emerging units; on the other hand, high-resolution waveform-based monitoring devices such as phasor measurement units (PMUs) use high reporting rates (50 frames per second or higher) and can include fault-recorder functionality. There are several applications where synchronized data received at a high reporting rate has to be used together with aggregated data from measurement equipment with a lower reporting rate (complying with power quality data aggregation standards), and the accompanying question is how adequate the energy transfer models are in such cases. For example, state estimators need both types of measurements: the so-called "classical" ones, adapted to a de facto steady-state paradigm of the relevant quantities, and the "modern" ones, i.e., with fewer embedded assumptions on the variability of those quantities. Another example is given by the operation of emerging active distribution grids, which involves higher variability of the energy transfer; consequently, a new model approximation for its characteristic quantities (voltages, currents) is needed.
Such a model is required not only to correctly design future measurement systems but also to better assess the quality of existing "classical" measurements, still in use for power quality improvement, voltage control, frequency control, network parameter estimation, etc. The main constraint so far is imposed by the existing standards, which recommend several aggregation algorithms with a specific focus on information compression. The further processing of RMS values (themselves already the output of a filtering algorithm) results in significant signal distortion. Presently there is a gap between (i) the level of approximation used for modeling the current and voltage waveforms that is implicitly assumed by most of the measurement devices deployed in power systems and (ii) the capabilities and functionalities offered by the high fidelity, high accuracy, and wide range of reporting rates of the newly deployed synchronized measurement units.

The talk will address:
o The measurement paradigm in power systems
  - System inertia, real-time, and steady-state
  - Instrument transformers; limited knowledge of the infrastructure
  - PQ, SCADA, and PMUs
  - Power system state estimation; WAMCS
  - IEDs, PMUs, microPMUs
  - Time-stamped versus synchronized measurements
o Measurement channel quality and models for energy transfer
  - Voltage and frequency variability; rate of change of frequency
  - The steady-state signal and rapid voltage changes (RVC); RMS values reported at 100 frames/s
  - Measurement data aggregation; filtering properties
  - Time-aggregation algorithms in the PQ framework
  - Statistical approaches
o Applications and challenges
  - Communication channel requirements; delay assessment in WAMCS
  - Smart metering with a high reporting rate (1 s)

The presentation provides an overview of these techniques, with examples from worldwide measurement solutions for smart grid deployment.
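The information loss introduced by time aggregation of RMS values can be illustrated with a small sketch. The waveform, dip, and interval lengths below are illustrative assumptions, not from the talk: a 40 ms voltage dip that is clearly visible in 20 ms (50 frames/s) RMS values is largely averaged away once those values are aggregated into 200 ms intervals by square-mean-root aggregation of the kind recommended in PQ standards.

```python
import math

F_NOM = 50.0     # nominal grid frequency, Hz
FS = 10_000      # sampling rate, Hz
V_RMS = 230.0    # nominal RMS voltage, V
AMP = V_RMS * math.sqrt(2)

# One second of a 50 Hz waveform with a dip to 50 % amplitude in [0.40 s, 0.44 s).
def sample(n):
    t = n / FS
    a = 0.5 * AMP if 0.40 <= t < 0.44 else AMP
    return a * math.sin(2 * math.pi * F_NOM * t)

signal = [sample(n) for n in range(FS)]

def rms(block):
    return math.sqrt(sum(x * x for x in block) / len(block))

# High reporting rate: one RMS value per 20 ms cycle (50 frames/s).
W = FS // 50
rms_50fps = [rms(signal[i:i + W]) for i in range(0, FS, W)]

# Aggregation: square-mean-root of ten consecutive values -> 200 ms intervals.
rms_200ms = [math.sqrt(sum(v * v for v in rms_50fps[i:i + 10]) / 10)
             for i in range(0, len(rms_50fps), 10)]

# The dip drops the 50 fps stream to 115 V, but the aggregated stream barely moves.
print(min(rms_50fps), min(rms_200ms))
```

The high-rate stream reports the dip at its true depth (115 V), while the aggregated stream never falls below roughly 212 V, so a control application fed only aggregated data would largely miss the event.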
DL Toolbox
Our Distinguished Lecturer Toolbox contains essential resources such as guidelines, forms, and process documents. DL Toolbox

Past Lecturers
Our complete Distinguished Lecturer List contains past and current DLs and their talk titles. View Complete DL List

DL Reports
Please review the DL reports and take a peek at the pictures by sending a request to the DLP Chair.