Modern control algorithms in emerging power systems process information delivered mainly by distributed, synchronized measurement systems and made available in data streams with different reporting rates. Multiple measurement approaches coexist: on one side, the time-aggregated measurements offered by currently deployed IEDs within the SCADA framework, including smart meters and other emerging units; on the other side, high-resolution waveform-based monitoring devices such as phasor measurement units (PMUs), which use high reporting rates (50 frames per second or higher) and can include fault-recorder functionality.
There are several applications where synchronized data received at a high reporting rate must be used together with aggregated data from measurement equipment with a lower reporting rate (complying with power quality data aggregation standards), and the accompanying question is how adequate the energy transfer models are in such cases. For example, state estimators need both types of measurements: the so-called “classical” ones, adapted to a de facto steady-state paradigm of the relevant quantities, and the “modern” ones, i.e. those with fewer embedded assumptions about the variability of the same quantities. Another example is the operation of emerging active distribution grids, which assumes higher variability of the energy transfer; consequently, a new model approximation for its characteristic quantities (voltages, currents) is needed. Such a model is required not only to correctly design future measurement systems but also to better assess the quality of existing “classical” measurements, still in use for power quality improvement, voltage control, frequency control, network parameter estimation, etc.
The main constraint so far is imposed by the existing standards, which recommend several aggregation algorithms with a specific focus on information compression. The further processing of RMS values (themselves already the output of a filtering algorithm) results in significant signal distortion.
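To make the effect of such aggregation concrete, the sketch below follows the common power quality practice of combining basic-window RMS values into longer intervals by taking the square root of the mean of their squares. It is a minimal illustration only; the window and interval lengths are assumptions, not values prescribed by this abstract or by any particular standard.

```python
import numpy as np

def rms(x):
    """Root-mean-square of a block of waveform samples."""
    return np.sqrt(np.mean(np.square(x)))

def aggregate_rms(samples, fs, cycle_window=0.2, agg_interval=3.0):
    """Illustrative time-aggregation of RMS values.

    samples      : raw voltage/current waveform samples
    fs           : sampling rate in Hz
    cycle_window : basic RMS window in seconds (e.g. 10 cycles at 50 Hz = 0.2 s)
    agg_interval : aggregation interval in seconds (e.g. 3 s or 600 s)

    Basic-window RMS values are combined by the square root of the mean of
    their squares, so fast fluctuations inside the aggregation interval are
    compressed away -- the information loss discussed above.
    """
    n_win = int(cycle_window * fs)
    basic_rms = np.array([rms(samples[i:i + n_win])
                          for i in range(0, len(samples) - n_win + 1, n_win)])
    n_agg = int(agg_interval / cycle_window)
    return np.array([np.sqrt(np.mean(np.square(basic_rms[i:i + n_agg])))
                     for i in range(0, len(basic_rms) - n_agg + 1, n_agg)])
```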
Presently there is a gap between (i) the level of approximation used for modeling the current and voltage waveforms that is implicitly assumed by most of the measurement devices deployed in power systems and (ii) the capabilities and functionalities of the newly deployed synchronized measurement units, with their high fidelity, high accuracy, and wide choice of reporting rates.
The talk will address:
o The measurement paradigm in power systems;
o Measurement channel quality and models for energy transfer;
o Applications and challenges.
The presentation provides an overview of these techniques, with examples from measurement solutions deployed for smart grids worldwide.
Brain-Computer Interfaces (BCIs) are a novel means of human-computer interaction relying on the direct measurement of brain signals. Possible applications include replacing, restoring, improving, enhancing, and supplementing the natural outputs of the central nervous system, as well as investigating brain functions. To account for daily-life constraints, researchers are exploring the possibility of providing visual stimuli by means of smart glasses or visors, which are increasingly exploited in extended reality (XR). Moreover, commercial electroencephalography (EEG) devices are considered for detecting the elicited potentials. Nevertheless, those studies were mostly application-oriented and did not deal with a metrological characterization of the stimulation and detection equipment.
To bridge this gap, metrology was applied to significantly enhance BCI systems, both in their design and in the understanding of their operation. It was demonstrated that, although often overlooked, applied metrology plays an important role in this field. Indeed, if the stimulation and detection equipment is not fully characterized, the quantities of interest for the brain-computer interface system may lead to a misleading interpretation of brain functioning. Instead, by means of the mentioned results, one can compare the measured brain signals with the behaviour of the equipment in the time and frequency domains, so as to correctly identify the contribution of the “human transducer” in the BCI measurement chain.
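As a minimal illustration of such a frequency-domain comparison (not taken from the talk itself), the sketch below estimates the EEG power around a known visual stimulation frequency using a Welch power spectral density. The function name, the assumption of a single channel, and the bandwidth parameter are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import welch

def response_at_stimulus(eeg, fs, f_stim, bw=0.5):
    """Estimate the EEG power around the stimulation frequency.

    eeg    : single-channel EEG record (volts)
    fs     : sampling rate in Hz
    f_stim : nominal frequency of the visual stimulus delivered by the
             smart glasses (assumed known from their characterization)
    bw     : half-bandwidth (Hz) of the band integrated around f_stim
    """
    f, pxx = welch(eeg, fs=fs, nperseg=4 * int(fs))
    band = (f >= f_stim - bw) & (f <= f_stim + bw)
    return np.trapz(pxx[band], f[band])
```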
Nowadays, when the global population is growing by more than 80 million a year, published studies predict increasing pressure on the planet's natural resources, including food resources. The situation worsens when unpredictable meteorological events multiply in the context of major climate change related to the global effect of anthropogenic greenhouse gas emissions. In this context, precision agriculture (PA), which combines technologies and practices to optimize agricultural production through specific farm management, is considered.
At the same time, PA focuses on the accuracy of operations, considering the place, the time to act, and the method to be applied. Agricultural operations are carried out to reach production goals using the information provided by smart sensors and instrumentation, increasing the sustainability of operations. Distributed smart sensing systems characterized by fixed and mobile nodes (the latter associated with unmanned aerial vehicles (UAVs)) are used to turn farming operations into data and to make future operations data-driven. These new technologies, including edge and cloud computing capable of running artificial intelligence algorithms, may contribute to a gradual replacement of human decisions, based on accumulated experience, with machine-based decisions. This new, digital way of acting in agriculture, combining technologies such as smart sensors, cloud and mobile computing, and data science, responds to the fact that classical decision-making can no longer cope now that cultivated areas are much larger and adverse meteorological events occur frequently, leading to mismanagement and yield losses.
Using smart sensors, computation, and data analysis, the applied quantities of water and fertilizer are optimized. Weather stations can provide additional information, such as ambient temperature, relative humidity, and wind velocity, which is used together with soil-measured quantities such as moisture, pH, conductivity, temperature, and macronutrient concentrations (nitrogen, potassium, calcium) to create models for farm operation optimization. Data from distributed sensing systems in the crop field can also be used to avoid plant stress phenomena (e.g. plant water stress). Data mining, associated with the analysis of massive data sets, is successfully applied in PA.
In this talk, we’ll see together the meaning of precision agriculture in the context of the heavy uncertainty associated with climate change. The IoT ecosystem for precision agriculture will be discussed, including multimodal sensing and artificial intelligence. Regarding sensing as part of the IoT ecosystem, both in-situ and remote sensing are considered. Agricultural UAV imagery and satellite imagery solutions, as well as the relation between the data coming from the smart sensors distributed in the field and the images acquired using multispectral imaging techniques, will be part of the presentation. The metrological characteristics of smart sensors, as well as the calibration procedures for in-situ and remote measurement smart sensing systems, will also be part of the talk.
Another important technology associated with innovative precision agriculture is the development of AI data-driven models for farming operations that consider data coming from different sources. Examples of data-driven models for smart irrigation and nutrient delivery will be considered.
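As a minimal sketch of what such a data-driven model can look like (the talk's own models are not specified here), the example below fits a simple linear regression from hypothetical soil and weather readings to an irrigation volume. All feature names and numbers are invented for illustration; a real model would use far more data and agronomic knowledge.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: each row is [soil_moisture_%, air_temp_C,
# rel_humidity_%, wind_speed_mps]; targets are irrigation volumes (L/m^2)
# actually applied on past days.
X = np.array([[18.0, 29.5, 41.0, 2.1],
              [25.0, 24.0, 60.0, 1.0],
              [12.5, 33.0, 30.0, 3.4],
              [30.0, 21.0, 72.0, 0.8]])
y = np.array([6.5, 3.0, 8.2, 1.5])

model = LinearRegression().fit(X, y)

# Suggest today's irrigation from the latest sensor readings.
today = np.array([[16.0, 31.0, 38.0, 2.8]])
print(f"suggested irrigation: {model.predict(today)[0]:.1f} L/m^2")
```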
Challenges to the adoption of precision agriculture by regular farmers, and how agricultural operations can support the important transformation towards greater environmental sustainability and increased crop quality, will be discussed. A specific part of the talk will address climate change and how this reality will affect the adoption of smart sensing and AI technologies for PA.
Thermal imaging, or thermography, consists in measuring and imaging the thermal radiation emitted by every object above absolute zero temperature. As this radiation is temperature-dependent, the recorded infrared images can be converted into temperature maps, or thermograms, allowing valuable information about the object under investigation to be retrieved. Thermal imaging has been known since the middle of the 20th century, and recent technological achievements concerning infrared imaging devices, together with the development of new procedures based on transient thermal emission measurements, have revolutionized the field. Nowadays, thermography is a method whose advantages are undisputed in engineering. It is routinely used for the non-destructive testing of materials, to investigate electronic components, or in the photovoltaic industry to detect defects in solar cells.
Despite an early interest, thermal imaging is currently rarely used for biomedical applications and even less in clinical settings. One reason is probably the initially disappointing results obtained solely with static measurement procedures, where the sample is investigated in its steady state, using unwieldy and performance-limited first-generation infrared cameras. In addition, the retrieval of quantitative data using dynamic thermal imaging procedures often requires complex mathematical modelling of the sample, which can be demanding in biomedical applications due to the large variability intrinsic to the life sciences.
The goal of this lecture is to a) demonstrate the potential of dynamic thermal imaging for biomedical applications and b) give the reader the necessary background to successfully translate the technology to their specific biomedical applications.
As a first step, the basics of thermal radiation and thermal imaging device technology will be reviewed. Rather than giving an exhaustive description of the technology, we aim to familiarize the reader with the key concepts that allow an optimal infrared camera to be selected for a specific application. In a subsequent part, we will present the foundations of thermodynamics needed to understand and mathematically model the heat transfer processes taking place inside the sample under investigation and between the sample and its environment. Such thermal exchanges are responsible for the sample surface temperature. As a next step, we will present in detail and compare the different procedures used in dynamic thermal imaging. In dynamic thermal imaging, the sample surface thermal emission is monitored in its transient state, which provides superior capabilities compared to passive thermal imaging. The thermal stimulation can be achieved with different modalities depending on the sample under investigation (lasers or flash lamps to investigate thin coatings, alternating magnetic fields to detect magnetic materials, microwaves to heat up water, or ultrasound to monitor cracks). Various procedures are possible: stepped- and pulsed-thermal imaging, pulsed-phase and lock-in thermal imaging, each exhibiting specific characteristics in terms of signal-to-noise ratio or measurement duration.
As an illustration, we will demonstrate how lock-in thermal imaging can be advantageously used to build extremely sensitive instruments to detect and characterize stimuli-responsive nanoparticles (both plasmonic and magnetic) in complex environments such as cell cultures, tissue, or food. In this example, we will present the research instrument in detail, including the choice of the various components, the digital lock-in demodulation implemented, the mathematical modelling of the sample required to extract quantitative information, and the performance of the resulting setup. The goal is to allow the reader to translate the dynamic thermal imaging measurement principles to their own biomedical application.
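As a minimal sketch of the kind of digital lock-in demodulation referred to above (the instrument's actual implementation is not detailed in this abstract), the example below correlates a stack of infrared frames with sine and cosine references at the modulation frequency to obtain per-pixel amplitude and phase images. All names, array shapes, and normalizations are illustrative assumptions.

```python
import numpy as np

def lockin_demodulate(frames, fs, f_mod):
    """Digital lock-in demodulation of a thermal image sequence.

    frames : array of shape (n_frames, height, width) -- infrared frames
             recorded while the sample is periodically stimulated
    fs     : camera frame rate in Hz
    f_mod  : modulation (stimulation) frequency in Hz

    Returns per-pixel amplitude and phase of the surface temperature
    oscillation at f_mod.
    """
    n = frames.shape[0]
    t = np.arange(n) / fs
    ref_sin = np.sin(2 * np.pi * f_mod * t)
    ref_cos = np.cos(2 * np.pi * f_mod * t)
    # Remove the mean (DC) level pixel by pixel before correlation.
    ac = frames - frames.mean(axis=0)
    s = np.tensordot(ref_sin, ac, axes=(0, 0)) * 2.0 / n
    c = np.tensordot(ref_cos, ac, axes=(0, 0)) * 2.0 / n
    amplitude = np.hypot(s, c)
    phase = np.arctan2(c, s)      # phase relative to the sine reference
    return amplitude, phase
```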
Impedance spectroscopy is a measurement method used in many fields of science and technology, including chemistry, medicine, and material sciences. The possibility of measuring the complex impedance over a wide frequency range opens up interesting opportunities for separating different physical effects, performing accurate measurements, and measuring otherwise inaccessible quantities. Especially with sensors, multifunctional measurement can be realized, so that more than one quantity can be measured at the same time and the measurement accuracy and reliability can be significantly improved.
In order to realize impedance spectroscopy-based solutions, several aspects should be carefully addressed, such as measurement procedures, modelling and signal processing, and parameter extraction. The development of suitable impedance models and the extraction of target information by optimization techniques is one of the most widely used approaches for calculating the target quantities.
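A minimal sketch of this model-fitting approach is given below: it fits a simple Randles-type equivalent circuit (series resistance plus a parallel resistor-capacitor element, chosen purely as an illustration) to complex impedance data by nonlinear least squares. The circuit, starting values, and synthetic data are assumptions, not material from the talk.

```python
import numpy as np
from scipy.optimize import least_squares

def z_model(params, omega):
    """Series resistance R_s plus a parallel R_ct / C_dl element
    (a simple Randles-type equivalent circuit used only as an example)."""
    r_s, r_ct, c_dl = params
    return r_s + r_ct / (1 + 1j * omega * r_ct * c_dl)

def fit_impedance(omega, z_measured, x0=(1.0, 100.0, 1e-6)):
    """Extract circuit parameters from measured complex impedance by
    minimizing the real and imaginary residuals jointly."""
    def residuals(p):
        dz = z_model(p, omega) - z_measured
        return np.concatenate([dz.real, dz.imag])
    return least_squares(residuals, x0, bounds=(0, np.inf)).x

# Usage with synthetic data:
omega = 2 * np.pi * np.logspace(-1, 5, 60)      # 0.1 Hz .. 100 kHz
z_true = z_model((0.5, 250.0, 2e-6), omega)
print(fit_impedance(omega, z_true))             # ~ [0.5, 250.0, 2e-6]
```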
Different presentations can be provided on specific topics to show the opportunities for applying this method in the fields of battery diagnosis, bioimpedance, sensors, and material sciences. The aim is to enable scientists to apply impedance spectroscopy in different fields of instrumentation and measurement in an adequate way.
Optical instrumentation, computer vision, and augmented reality are powerful platform technologies. In this lecture, we will discuss how these technologies can be used for medical applications. I will give an overview of the technologies and the current challenges relevant to medical and surgical settings. Recent advances in image acquisition, computer vision, photonics, and instrumentation present the scientific community with the opportunity to develop new systems that impact healthcare. By leveraging an integrated design, the advantages of hardware and software approaches can be combined and their shortcomings complemented. I will present new approaches to fluorescence imaging for surgical applications. We will discuss hardware instrumentation, algorithm development, and system deployment. New developments in multimodal imaging and image registration will also be discussed. For example, a combination of real-time intraoperative optical imaging and CT-based surgical navigation represents a promising approach for clinical decision support. The integration of 3D imaging and augmented reality provides surgeons with an intuitive way to visualize surgical data. In addition to technological development, I will discuss the clinical translation of systems and cross-disciplinary collaboration. Interdisciplinary approaches to solving complex problems in surgically relevant settings will be described.
Industry 4.0 is considered the great revolution of the past few years. New technologies, the Internet of Things, and the possibility of monitoring everything from everywhere have changed both plants and the approaches to industrial production. Medicine is considered a slowly changing discipline, and a model of the human body is a difficult concept to develop. Yet we can identify some stages at which medicine can be compared to industry. Four major changes revolutionized medicine:
Medicine 1.0: James Watson and Francis Crick described the structure of DNA. This was the beginning of research in the field of molecular and cellular biology.
Medicine 2.0: The sequencing of the human genome. This achievement made it possible to trace the origin of diseases.
Medicine 3.0: The convergence of biology and engineering. Now the biologist’s experience can be combined with the technology of the engineers, and new approaches to new forms of analysis can be used.
Medicine 4.0: The digitalization of medicine: IoT devices and techniques, AI to perform analyses, machine learning for diagnoses, brain-computer interfaces, and smart wearable sensors.
Medicine 4.0 is definitely a great revolution in patient care, and new horizons are possible today. Covid-19 has highlighted problems that have existed for a long time. The relocation of services means remote monitoring and remote diagnoses without direct contact between the doctor and the patient. Hospitals are freed from routine tests that can be performed by patients at home and reported to doctors over the internet, and potentially dangerous conditions can be prevented. During the Covid emergency, everybody could check their own condition and ask for a medical visit (swab) only when really necessary. This is true telemedicine: not a WhatsApp chat in which an elderly person tries to talk to a doctor, but a smart device able to measure objective vital parameters and send them to a health care center. Of course, Medicine 4.0 requires new technologies for smart sensors. These devices need to be very easy to use, fast, reliable, and low-cost, and they must be accepted by both patients and doctors.
In this talk we’ll see together the meaning of telemedicine and e-health. E-health is the key to allowing people to self-monitor their vital signs. Some devices already exist, but a new approach will allow everybody (especially older people with cognitive difficulties) to use these systems in a friendly way. Telemedicine will be the new approach to the concept of the hospital: a virtual hospital, without any physical contact but with an objective measurement of every parameter. A final remote discussion between the doctor and the patient is still required for both to feel comfortable, but the doctor will have all the vital signs recorded, allowing a diagnosis based on reliable data.
Another important aspect of Medicine 4.0 is the possibility of using AI both to perform parameter measurement and to manage the monitoring of multiple patients. New image processing based on artificial neural networks allows doctors to carry out better and faster analyses. AI algorithms are also able to manage intensive care rooms with several patients, reducing the number of doctors involved in the overall monitoring of the situation.
• A basic introduction to the sense-plan-act challenges of autonomous vehicles
• Introduction to the most common state-of-the-art sensors used in autonomous driving (radar, camera, lidar, GPS, odometry, vehicle-2-x) in terms of benefits and disadvantages, along with mathematical models of these sensors
Autonomous driving is seen as one of the pivotal technologies that will considerably shape our society, influence future transportation modes and quality of life, and alter the face of mobility as we experience it today. Many benefits are expected, ranging from reduced accidents, optimized traffic, improved comfort, social inclusion, and lower emissions to better road utilization due to the efficient integration of private and public transport. Autonomous driving is a highly complex sensing and control problem. State-of-the-art vehicles include many different combinations of sensors, including radar, cameras, and lidar. Each sensor provides specific information about the environment at varying levels and has an inherent uncertainty and accuracy measure. Sensors are the key to perceiving the outside world in an autonomous driving system, and their cooperative performance directly determines the safety of such vehicles. The ability of one isolated sensor to provide accurate, reliable data about its environment is extremely limited, as the environment is usually not very well defined. Beyond the sensors needed for perception, the control system needs some basic measure of its position in space and its surrounding reality. Real-time capable sensor processing techniques used to integrate this information have to manage the propagation of their inaccuracies, fuse information to reduce the uncertainties and, ultimately, offer levels of confidence in the produced representations that can then be used for safe navigation decisions and actions.
• Overview of different sensor data fusion taxonomies as well as different ways to model the environment (dynamic object tracking vs. occupancy grids) in the Bayesian framework, including uncertainty quantification
• Exploration of potential problems of sensor data fusion, e.g. data association, outlier treatment, anomalies, bias, correlation, or out-of-sequence measurements
• Propagation of uncertainties from object recognition to decision making based on selected examples, e.g. real-time vehicle pose estimation based on uncertain measurements from different sources (GPS, odometry, lidar), including a discussion of fault detection and localization (sensor drift, breakdown, outliers, etc.)
Sensor fusion overcomes the drawbacks of current sensor technology by combining information from many independent sources of limited accuracy and reliability, making the system less vulnerable to random and systematic failures of a single component. Multi-source information fusion avoids the perceptual limitations and uncertainties of a single sensor and forms a more comprehensive perception and recognition of the environment, including static and dynamic objects. Through sensor fusion we combine readings from different sensors, remove inconsistencies, and merge the information into one coherent structure. This kind of processing is a fundamental feature of all animal and human navigation, where multiple information sources such as vision, hearing, and balance are combined to determine position and plan a path to a destination. In addition, several readings from the same sensor are combined, making the system less sensitive to noise and anomalous observations. In general, multi-sensor data fusion can achieve increased object classification accuracy, improved state estimation accuracy, improved robustness (for instance in adverse weather conditions), increased availability, and an enlarged field of view. Emerging applications such as autonomous driving systems, which are in direct contact and interaction with the real world, require reliable and accurate information about their environment in real time.
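As a minimal sketch of how fusion reduces uncertainty (a one-dimensional toy version of the pose estimation example listed above, not the lecture's actual algorithm), the snippet below combines an odometry-based position prediction with a GPS measurement using a Kalman-style update; the numbers in the usage line are invented.

```python
def fuse_position(x_pred, var_pred, z_gps, var_gps):
    """One-dimensional Bayesian fusion step.

    x_pred, var_pred : position predicted from odometry and its variance
    z_gps,  var_gps  : GPS position measurement and its variance

    The Kalman gain weights each source by its (inverse) uncertainty, and
    the fused variance is smaller than either input variance -- the basic
    mechanism by which fusion reduces uncertainty.
    """
    k = var_pred / (var_pred + var_gps)          # Kalman gain
    x_fused = x_pred + k * (z_gps - x_pred)
    var_fused = (1.0 - k) * var_pred
    return x_fused, var_fused

# Example: odometry says 10.0 m (var 4.0), GPS says 12.0 m (var 1.0).
print(fuse_position(10.0, 4.0, 12.0, 1.0))       # -> (11.6, 0.8)
```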
Over the past three decades, a wide range of electrostatic sensors have been developed and utilized for the continuous monitoring and measurement of various industrial processes. Electrostatic sensors offer simplicity in structure, cost-effectiveness, and suitability for a variety of process conditions. They either provide unique solutions to some measurement challenges or offer more cost-effective or complementary options to established sensors such as those based on acoustic, capacitive, electromagnetic, or optical principles. The established or potential applications of electrostatic sensors are wide-ranging, but the underlying sensing principle and system characteristics are very similar. This presentation will review recent advances in electrostatic sensors and the associated signal processing algorithms for industrial measurement and monitoring applications. The fundamental sensing principle and characteristics of electrostatic sensors will be introduced, and a number of practical applications will be presented. These include pulverized fuel flow metering, linear and rotational speed measurement, condition monitoring of mechanical systems, and advanced flame monitoring. Results from recent experimental and modelling studies as well as industrial trials of electrostatic sensors will be reported.
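One signal-processing approach commonly reported for electrostatic speed measurement, though not spelled out in this abstract, is cross-correlation of the signals from two electrodes spaced a known distance apart. The sketch below illustrates that transit-time idea under the assumption of a single dominant flow velocity; the function and its parameters are hypothetical.

```python
import numpy as np

def correlation_velocity(sig_upstream, sig_downstream, fs, spacing):
    """Estimate flow/surface velocity from two electrostatic sensor
    signals using cross-correlation (transit-time) processing.

    sig_upstream, sig_downstream : signals from two electrodes separated
                                   by `spacing` metres along the flow
    fs : sampling rate in Hz

    The lag that maximizes the cross-correlation is taken as the transit
    time of the charged particles between the two electrodes.
    """
    a = sig_upstream - np.mean(sig_upstream)
    b = sig_downstream - np.mean(sig_downstream)
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)          # delay in samples
    if lag <= 0:
        raise ValueError("no positive transit time found")
    return spacing * fs / lag                     # velocity in m/s
```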
Over the past ten years, various machine learning techniques have been incorporated into a range of measurement systems for multiphase flow measurement and combustion process monitoring. Such techniques, in conjunction with low-cost sensors and sensor arrays, provide either unique solutions to some measurement challenges or more cost-effective or complementary options to other possible methods. The established or potential applications of machine learning in measurement and instrumentation are wide-ranging, but the underlying principle, advantages, and limitations are similar. This presentation will review recent advances in the application of data-driven modeling techniques to the measurement of gas-liquid, gas-solids, and liquid-solids mixture flows and to the advanced monitoring of combustion processes. These include the mass flow rate measurement of air-oil two-phase flow, carbon dioxide two-phase flow, slurry flow, and pneumatically conveyed pulverized fuel in a range of industrial sectors. Meanwhile, systems that incorporate machine learning algorithms for the online continuous identification of pulverized fuel, burner condition monitoring, and combustion plant optimization will be introduced. Results from recent experimental programs and trials on industrial-scale test plants will be reported.
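To make the data-driven modeling idea concrete, here is a minimal sketch (not taken from the presentation) that trains a random-forest regressor to map hypothetical sensor-array features to a reference mass flow rate. The synthetic data stand in for measurements from an instrumented test rig and are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical data set: each row holds statistical features extracted from
# an electrostatic/capacitive sensor array (e.g. RMS levels, correlation
# velocity, signal skewness); targets are reference mass flow rates (kg/h)
# from a gravimetric rig.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 6))
flow_ref = 50 + 10 * features[:, 0] - 5 * features[:, 2] + rng.normal(0, 1, 500)

X_train, X_test, y_train, y_test = train_test_split(
    features, flow_ref, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"hold-out R^2: {model.score(X_test, y_test):.2f}")
```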
The electromagnetic properties (permittivity and permeability) of a material determine how the material interacts with an electromagnetic field. Knowledge of these properties and of their frequency and temperature dependence is of great importance in various areas of science and engineering, in both basic and applied research. They have always been important quantities for electrical engineers and physicists involved in the design and application of circuit components. Over the past several decades, knowledge of the electromagnetic properties has also become important to scientists and engineers involved in the design of stealth vehicles, applications most often associated with the defense industry.

Besides these traditional applications, knowledge of the electromagnetic properties has become increasingly important to agricultural engineers, biological engineers, and food scientists. The most obvious application is in microwave and RF heating of food products, where the electromagnetic properties determine how long a food item needs to be exposed to RF or microwave energy for proper cooking. For prepackaged food items, the electromagnetic properties of the packaging materials are also important, since the interaction with the packaging material likewise affects the cooking time. Besides these obvious applications there are also numerous not-so-obvious ones. Electromagnetic properties can often be related to a physical parameter of interest: a change in the molecular structure or composition of a material results in a change in its electromagnetic properties. It has been demonstrated that material properties such as moisture content, fruit ripeness, bacterial content, mechanical stress, tissue health, and other seemingly unrelated parameters are related to the dielectric properties, or permittivity, of the material. Many key parameters of colloids, such as structure, consistency, and concentration, are directly related to the electromagnetic properties. Yeast concentration in a fermentation process, bacterial count in milk, and the detection and monitoring of microorganisms are a few examples on which research has been performed. Diseased tissue has different electromagnetic properties than healthy tissue. Accurate measurements of these properties can provide scientists and engineers with valuable information that allows them to properly use a material in its intended application or to monitor a process for improved quality control.

Measurement techniques typically involve placing the material in an appropriate sample holder and determining the permittivity from measurements made on the sample holder. The sample holder can be a parallel plate or coaxial capacitor, a resonant cavity, or a transmission line. These structures are used because the relationship between the electromagnetic properties and the measurements is fundamental and well understood. One disadvantage of these types of sample holders is that many materials cannot be easily placed in them; sample preparation is almost always required, which limits their use in the real-time monitoring of processes. Another disadvantage is that several of these sample holders are usable only over a narrow frequency range, whereas extracting physical properties from electromagnetic property measurements often requires measurements made over a wide frequency range.
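As a minimal sketch of the fundamental, well-understood relationship mentioned above, the example below recovers the complex relative permittivity from an ideal parallel-plate sample holder using C = ε0·εr′·A/d and εr″ = εr′·tanδ. Fringing fields and electrode effects are ignored, and the numbers in the usage example are invented.

```python
import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def permittivity_parallel_plate(c_meas, tan_delta, area, gap):
    """Relative complex permittivity from a parallel-plate sample holder.

    c_meas    : measured capacitance with the sample inserted (F)
    tan_delta : measured dissipation factor at the same frequency
    area      : electrode area (m^2)
    gap       : plate separation, i.e. sample thickness (m)

    Uses the ideal-capacitor relation C = eps0 * eps_r' * A / d and
    eps_r'' = eps_r' * tan_delta; fringing-field and electrode effects
    are ignored in this sketch.
    """
    eps_real = c_meas * gap / (EPS0 * area)
    eps_imag = eps_real * tan_delta
    return eps_real - 1j * eps_imag

# Example: 25 mm diameter plates, 1 mm thick sample, C = 60 pF, tan(delta) = 0.02
area = np.pi * (12.5e-3) ** 2
print(permittivity_parallel_plate(60e-12, 0.02, area, 1e-3))
```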
Techniques for which this relationship between electromagnetic properties and measurements is not as straightforward have also been employed. One of these techniques is the open-ended coaxial-line probe. This technique has attracted much attention because of its applicability to nondestructive testing over a relatively broad frequency range. It can be used to measure a wide variety of materials, including liquids, solids, and semisolids. These attributes make it a very attractive technique for measuring biological, agricultural, and food materials. In its simplest form, it consists of a coaxial cable with no connector attached to one end; this end is inserted into the material being measured. All of these measurement techniques will be reviewed. Together they cover the frequency range from DC to 1 THz.
The permittivity (dielectric properties) of a material is one of the factors that determine how the material interacts with an electromagnetic field. Knowledge of the dielectric properties of materials and of their frequency and temperature dependence is of great importance in various areas of science and engineering, in both basic and applied research. Permittivity has always been an important quantity for electrical engineers and physicists involved in the design and application of circuit components. Over the past several decades, it has also become an important property to scientists and engineers involved in the design of stealth vehicles, applications most often associated with the defense industry. For the typical electrical engineer, permittivity is a number that is needed to solve Maxwell’s equations. One purpose of this presentation is to explain why a material has a particular permittivity; the short answer is that it does so because of its molecular structure. Another purpose is to show how the permittivity can be related to other physical material properties.

Knowledge of permittivity has become increasingly important to agricultural engineers, biological engineers, and food scientists. The most obvious application is in microwave and RF heating of food products, where the dielectric properties determine how long a food item needs to be exposed to RF or microwave energy for proper cooking. For prepackaged food items, the dielectric properties of the packaging materials are also important, since the interaction with the packaging material likewise affects the cooking time. Besides these obvious applications there are also numerous not-so-obvious ones. Dielectric properties can often be related to a physical parameter of interest: a change in the molecular structure or composition of a material results in a change in its permittivity. It has been demonstrated that material properties such as moisture content, fruit ripeness, bacterial content, mechanical stress, tissue health, and other seemingly unrelated parameters are related to the dielectric properties, or permittivity, of the material. Many key parameters of colloids, such as structure, consistency, and concentration, are directly related to the dielectric properties. Yeast concentration in a fermentation process, bacterial count in milk, and the detection and monitoring of microorganisms are a few examples on which research has been performed. Diseased tissue has a different permittivity from healthy tissue. Accurate measurements of these properties can provide scientists and engineers with valuable information that allows them to properly use a material in its intended application or to monitor a process for improved quality control. Measurement techniques will be reviewed; they cover the frequency range from DC to 1 THz.
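The link between molecular structure and frequency-dependent permittivity mentioned above is classically illustrated by the Debye relaxation model. The model is not named in this abstract, so the sketch below is offered only as a hedged illustration; the water-like parameter values in the usage example are rough textbook figures used to show the frequency dependence, not measured data.

```python
import numpy as np

def debye_permittivity(freq_hz, eps_static, eps_inf, tau):
    """Debye model of frequency-dependent complex relative permittivity.

    freq_hz    : frequency or array of frequencies (Hz)
    eps_static : low-frequency (static) relative permittivity
    eps_inf    : high-frequency limit of the relative permittivity
    tau        : dipole relaxation time (s)

    eps(w) = eps_inf + (eps_static - eps_inf) / (1 + j*w*tau)
    """
    omega = 2 * np.pi * np.asarray(freq_hz)
    return eps_inf + (eps_static - eps_inf) / (1 + 1j * omega * tau)

# Rough, illustrative values for water near room temperature:
freqs = np.logspace(6, 11, 6)          # 1 MHz .. 100 GHz
print(debye_permittivity(freqs, 78.4, 5.2, 8.3e-12))
```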