• A basic introduction to the sense-plan-act challenges of autonomous vehicles
• Introduction to the most common state-of-the-art sensors used in autonomous driving (radar, camera, lidar, GPS, odometry, vehicle-2-x) in terms of benefits and disadvantages, along with mathematical models of these sensors
Autonomous driving is seen as one of the pivotal technologies that will considerably shape our society, influence future transportation modes and quality of life, and alter the face of mobility as we experience it today. Many benefits are expected, ranging from reduced accidents, optimized traffic, improved comfort, social inclusion, and lower emissions to better road utilization through efficient integration of private and public transport. Autonomous driving is a highly complex sensing and control problem. State-of-the-art vehicles include many different compositions of sensors, including radar, cameras, and lidar. Each sensor provides specific information about the environment at varying levels and has an inherent uncertainty and accuracy measure. Sensors are the key to the perception of the outside world in an autonomous driving system, and their cooperative performance directly determines the safety of such vehicles. The ability of one isolated sensor to provide accurate, reliable data about its environment is extremely limited, as the environment is usually not very well defined. Beyond the sensors needed for perception, the control system needs some basic measure of its position in space and its surrounding reality. Real-time capable sensor processing techniques used to integrate this information have to manage the propagation of their inaccuracies, fuse information to reduce the uncertainties and, ultimately, offer levels of confidence in the produced representations that can then be used for safe navigation decisions and actions.
• Overview of different sensor data fusion taxonomies, as well as different ways to model the environment (dynamic object tracking vs. occupancy grid) in the Bayesian framework, including uncertainty quantification
• Exploring potential problems of sensor data fusion, e.g. data association, outlier treatment, anomalies, bias, correlation, or out-of-sequence measurements
• Propagation of uncertainties from object recognition to decision making based on selected examples, e.g. real-time vehicle pose estimation based on uncertain measurements from different sources (GPS, odometry, lidar), including a discussion of fault detection and localization (sensor drift, breakdown, outliers, etc.)
Sensor fusion overcomes the drawbacks of current sensor technology by combining information from many independent sources of limited accuracy and reliability, making the system less vulnerable to random and systematic failures of a single component. Multi-source information fusion overcomes the perceptual limitations and uncertainties of a single sensor and forms a more comprehensive perception and recognition of the environment, including static and dynamic objects. Through sensor fusion we combine readings from different sensors, remove inconsistencies, and merge the information into one coherent structure. This kind of processing is a fundamental feature of all animal and human navigation, where multiple information sources such as vision, hearing, and balance are combined to determine position and plan a path to a destination. In addition, several readings from the same sensor are combined, making the system less sensitive to noise and anomalous observations. In general, multi-sensor data fusion can achieve increased object classification accuracy, improved state estimation accuracy, improved robustness (for instance in adverse weather conditions), increased availability, and an enlarged field of view. Emerging applications such as autonomous driving systems, which are in direct contact and interaction with the real world, require reliable and accurate information about their environment in real time.
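As a concrete illustration of how fusing independent readings reduces uncertainty, the following is a minimal sketch (not taken from the talk) of inverse-variance weighting, the static special case of a Kalman update. The sensor labels and numbers are purely illustrative:

```python
# Minimal sketch: fusing two independent, noisy readings of the same
# quantity by inverse-variance weighting. The fused variance is always
# smaller than either input variance, which is the formal sense in which
# fusion "reduces uncertainty".

def fuse(z1, var1, z2, var2):
    """Combine two independent estimates of the same quantity."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Hypothetical example: radar reports 50.2 m with variance 0.25,
# lidar reports 49.9 m with the much smaller variance 0.04.
est, var = fuse(50.2, 0.25, 49.9, 0.04)
print(est, var)
```

The fused estimate lands close to the more trustworthy lidar reading, and the fused variance is below both input variances.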
The electromagnetic properties (permittivity and permeability) of a material determine how the material interacts with an electromagnetic field. The knowledge of these properties and their frequency and temperature dependence is of great importance in various areas of science and engineering, in both basic and applied research. These properties have always been important quantities to electrical engineers and physicists involved in the design and application of circuit components. Over the past several decades, knowledge of the electromagnetic properties has also become important to scientists and engineers involved in the design of stealth vehicles. These applications are most often associated with the defense industry. Besides these traditional applications, knowledge of the electromagnetic properties has become increasingly important to agricultural engineers, biological engineers, and food scientists. The most obvious application of this knowledge is in microwave and RF heating of food products. Here, knowledge of the electromagnetic properties is important in determining how long a food item needs to be exposed to RF or microwave energy for proper cooking. For prepackaged food items, knowledge of the electromagnetic properties of the packaging materials is also important, since the interaction with the packaging material also affects the cooking time. Besides these obvious applications there are also numerous not-so-obvious ones. Electromagnetic properties can often be related to a physical parameter of interest: a change in the molecular structure or composition of a material results in a change in its electromagnetic properties. It has been demonstrated that material properties such as moisture content, fruit ripeness, bacterial content, mechanical stress, tissue health, and other seemingly unrelated parameters are related to the dielectric properties or permittivity of the material.
Many key parameters of colloids such as structure, consistency and concentration are directly related to the electromagnetic properties. Yeast concentration in a fermentation process, bacterial count in milk, and the detection and monitoring of microorganisms are a few examples on which research has been performed. Diseased tissue has different electromagnetic properties than healthy tissue. Accurate measurements of these properties can provide scientists and engineers with valuable information that allows them to properly use the material in its intended application or to monitor a process for improved quality control. Measurement techniques typically involve placing the material in an appropriate sample holder and determining the permittivity from measurements made on the sample holder. The sample holder can be a parallel plate or coaxial capacitor, a resonant cavity or a transmission line. These structures are used because the relationship between the electromagnetic properties and measurements are fundamental and well understood. One disadvantage of these types of sample holders is that many materials cannot be easily placed in them. Sample preparation is almost always required. This limits their use in real-time monitoring of processes. Another disadvantage is that several of these sample holders are usable only over a narrow frequency range. Extracting physical properties from electromagnetic property measurements often requires measurements made over a wide frequency range. Techniques for which this relationship, between electromagnetic properties and measurements, is not as straightforward have also been employed. One of these techniques is the open-ended coaxial-line probe. This technique has attracted much attention because of its applicability to nondestructive testing over a relatively broad frequency range. It can be used to measure a wide variety of materials including liquids, solids and semisolids. 
These attributes make it a very attractive technique for measuring biological, agricultural, and food materials. In its simplest form, it consists of a coaxial cable with no connector attached to one end; this open end is inserted into the material being measured. All of these measurement techniques will be reviewed. Together they cover the frequency range from DC to 1 THz.
The permittivity (dielectric properties) of a material is one of the factors that determine how the material interacts with an electromagnetic field. The knowledge of the dielectric properties of materials and their frequency and temperature dependence is of great importance in various areas of science and engineering, in both basic and applied research. Permittivity has always been an important quantity to electrical engineers and physicists involved in the design and application of circuit components. Over the past several decades, knowledge of permittivity has also become an important parameter to scientists and engineers involved in the design of stealth vehicles. These applications are most often associated with the defense industry. For the typical electrical engineer, permittivity is a number that is needed to solve Maxwell’s equations. One of the purposes of this presentation is to explain why a material has a particular permittivity; the short answer is that a material has a particular permittivity because of its molecular structure. Another purpose is to show how permittivity can be related to other physical material properties. The knowledge of permittivity has become increasingly important to agricultural engineers, biological engineers, and food scientists. The most obvious application of this knowledge is in microwave and RF heating of food products. Here, knowledge of the dielectric properties is important in determining how long a food item needs to be exposed to RF or microwave energy for proper cooking. For prepackaged food items, knowledge of the dielectric properties of the packaging materials is also important, since the interaction with the packaging material also affects the cooking time. Besides these obvious applications there are also numerous not-so-obvious applications. Dielectric properties can often be related to a physical parameter of interest.
A change in the molecular structure or composition of a material results in a change in its permittivity. It has been demonstrated that material properties such as moisture content, fruit ripeness, bacterial content, mechanical stress, tissue health, and other seemingly unrelated parameters are related to the dielectric properties or permittivity of the material. Many key parameters of colloids, such as structure, consistency, and concentration, are directly related to the dielectric properties. Yeast concentration in a fermentation process, bacterial count in milk, and the detection and monitoring of microorganisms are a few examples on which research has been performed. Diseased tissue has a different permittivity from healthy tissue. Accurate measurements of these properties can provide scientists and engineers with valuable information that allows them to properly use the material in its intended application or to monitor a process for improved quality control. Techniques for measuring permittivity will be reviewed. These techniques cover the frequency range from DC to 1 THz.
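To make the heating application concrete, here is a small sketch (illustrative values only, not measured data) of two standard relationships: the loss tangent tan δ = ε″/ε′ of a complex relative permittivity ε′ − jε″, and the average power dissipated per unit volume in a lossy dielectric, p = ω·ε0·ε″·E²rms, which is what ultimately sets the required cooking time:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def loss_tangent(eps_real, eps_imag):
    """tan(delta) = eps'' / eps' for complex permittivity eps' - j*eps''."""
    return eps_imag / eps_real

def heating_power_density(freq_hz, eps_imag, e_field_rms):
    """Average power dissipated per unit volume in a lossy dielectric:
    p = omega * eps0 * eps'' * E_rms**2  (W/m^3)."""
    omega = 2 * math.pi * freq_hz
    return omega * EPS0 * eps_imag * e_field_rms ** 2

# Illustrative numbers only: a moist, food-like material at 2.45 GHz
# with assumed eps' = 50 and eps'' = 15, in a 1 kV/m RMS field.
tan_d = loss_tangent(50.0, 15.0)
p = heating_power_density(2.45e9, 15.0, 1000.0)
print(tan_d, p)
```

A larger ε″ (e.g. from higher moisture content) raises the dissipated power density directly, which is why permittivity correlates so strongly with heating behaviour.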
Metrology lies at the very basis of acquiring scientific knowledge. In today’s interdependent world, ensuring uniform metrology within and across national boundaries is a very important enabling factor of both national and international trade. In electric power systems, measurements of electrical and non-electrical quantities are necessary for control, protection, and safe and reliable operation. Another very significant application of metrology is in electric energy trade, i.e. in revenue metering for both industrial and residential customers, but also between countries. The impact of distributed power generation, renewable energy resources, and the deregulation of electrical power utilities introduced in many countries will be discussed. An attempt will be made to address the question of what Smart Grids really are, and how they relate to smart metering, synchrophasor measurements, energy storage, and other power system technologies. The role of National Measurement Institutes will be highlighted. New instrumentation and measurement methods for both highest-accuracy and industrial applications of AC electrical power and energy, including high-voltage and high-current calibrations and applications, will be addressed.
Rogowski coils have long been used for monitoring or measurement of high, impulse, and transient currents. They are used for monitoring and control, protective relaying, power distribution switches, electric arc furnaces, electromagnetic launchers, core testing of large rotating electrical machines, partial-discharge measurements in high-voltage cables, power electronics, resistance welding in the automotive industry, and plasma physics. Since their nonmagnetic cores do not saturate, they can operate over wide current ranges with inherent linearity. The applications entail low- and high-accuracy coils, measuring currents from a few amperes to tens of MA, at frequencies from a fraction of a hertz to hundreds of MHz. The increased interest in Rogowski coils over the last decades has led to significant improvements in their design and performance. Their development has included innovative designs, new materials, machining techniques, and printed circuit board structures. This presentation will cover the principles of operation, design, calibration, standards, and applications of Rogowski coils.
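The principle of operation can be sketched in a few lines: the coil output is v(t) = M·di/dt, so the primary current is recovered by integrating the coil voltage and dividing by the mutual inductance M. The coil and signal parameters below are hypothetical:

```python
import math

M = 1e-7          # mutual inductance, H (hypothetical coil)
F = 50.0          # primary current frequency, Hz
I_PEAK = 1000.0   # primary current amplitude, A
DT = 1e-6         # sample interval, s

def coil_voltage(t):
    # v(t) = M * di/dt for i(t) = I_PEAK * sin(2*pi*F*t)
    return M * I_PEAK * 2 * math.pi * F * math.cos(2 * math.pi * F * t)

# Trapezoidal integration: i(t) = (1/M) * integral of v dt, with i(0) = 0.
n = 20000                      # one full 50 Hz period at 1 us sampling
integral = 0.0
recovered = [0.0]
for k in range(1, n + 1):
    v0, v1 = coil_voltage((k - 1) * DT), coil_voltage(k * DT)
    integral += 0.5 * (v0 + v1) * DT
    recovered.append(integral / M)

# The peak of the recovered waveform should match the 1000 A amplitude.
print(max(recovered))
```

Practical coils replace this numerical step with an analog or digital integrator, but the arithmetic is the same; since no magnetic core is involved, the relation stays linear over the full current range.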
Estimating latency between network nodes in the Internet can play a significant role in improving the performance of many applications and services that use latency to make routing decisions. A popular example is peer-to-peer (P2P) networks, which need to build an overlay between peers in a manner that minimizes the delay of exchanging data among peers. Latency between peers is therefore a critical parameter that directly affects the quality of applications such as video streaming, gaming, file sharing, content distribution, server farms, and massively multiuser virtual environments (MMVE) or massively multiplayer online games (MMOG). But acquiring latency information requires a considerable number of measurements to be performed at each node in order for that node to keep a record of its latency to all other nodes. Moreover, the measured latency values are frequently subject to change, and the measurements need to be regularly repeated to stay current against network dynamics. This has motivated the use of techniques that alleviate the need for a large number of empirical measurements and instead try to predict the entire network latency matrix from a small set of latency measurements. Coordinate-based approaches are the most popular solutions to this problem. The basic idea behind coordinate-based schemes is to model the latency between each pair of nodes as the virtual distance between those nodes in a virtual coordinate system.
In this talk, we will cover the basics of how to measure latency in a distributed manner, without the need for a bottleneck central server. We will start with an introduction and background to the field, then briefly explain measurement approaches such as the Network Time Protocol, the Global Positioning System, and the IEEE 1588 standard, before moving to coordinate-based measurement approaches such as GNP (Global Network Positioning), CAN (Content Addressable Network), Lighthouse, Practical Internet Coordinates (PIC), VIVALDI, and Pcoord. Finally, we also propose a new decentralized coordinate-based solution with higher accuracy, mathematically proven convergence, and a locality-aware design for lower delay.
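To give a flavor of how coordinate-based schemes work, below is a minimal sketch of a Vivaldi-style update step (2-D, fixed step size, no height vector), a deliberate simplification of the published algorithm with toy RTT values:

```python
import math

# Vivaldi-style update (simplified sketch): each node nudges its virtual
# coordinate so that Euclidean distance better matches measured RTT, like
# a spring relaxing toward its rest length.

def vivaldi_update(xi, xj, rtt, delta=0.05):
    """Return node i's new coordinate after observing RTT to node j."""
    dx = [a - b for a, b in zip(xi, xj)]
    dist = math.hypot(dx[0], dx[1]) or 1e-9   # avoid division by zero
    error = rtt - dist                        # > 0: nodes sit too close
    unit = [d / dist for d in dx]             # direction away from j
    return [a + delta * error * u for a, u in zip(xi, unit)]

# Toy example: two nodes with a true RTT of 100 ms; repeated mutual
# updates shrink the prediction error of the coordinate system.
a, b = [0.0, 0.0], [10.0, 0.0]
for _ in range(200):
    a = vivaldi_update(a, b, rtt=100.0)
    b = vivaldi_update(b, a, rtt=100.0)
predicted = math.hypot(a[0] - b[0], a[1] - b[1])
print(predicted)
```

After the updates, the inter-coordinate distance predicts the 100 ms RTT without any further direct measurement, which is exactly the saving these schemes offer at scale.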
The target audience of this tutorial is practitioners, scientists, and engineers who work with networking systems and applications where there is a need to measure and estimate delay among network nodes, possibly a massive number of nodes (thousands, tens of thousands, or even hundreds of thousands of nodes).
A Massively Multiuser Virtual Environment (MMVE) sets out to create an environment for thousands, tens of thousands, or even millions of users to simultaneously interact with each other as in the real world. For example, Massively Multiuser Online Games (MMOGs), now a very profitable sector of the industry and a subject of academic and industrial research, are a special case of MMVEs in which hundreds of thousands of players simultaneously play games with each other. Although originally designed for gaming, MMOGs are now widely used for socializing, business, commerce, scientific experimentation, and many other practical purposes. One could say that MMOGs are the “killer app” that brings MMVE into the realm of eSociety. This is evident from the fact that virtual currencies such as the Linden (or L$) in Second Life are already being exchanged for real-world money. Similarly, virtual goods and virtual real estate are being bought and sold with real-world money. Massive numbers of users spend their time with their fellow players at online games like EverQuest, Half-Life, World of Warcraft, and Second Life. World of Warcraft, for example, has over twelve million users, with a peak of over 500,000 players online at a given time. There is no doubt that MMOGs and MMVEs have the potential to be the cornerstone of any eSociety platform in the near future because they bring the massiveness, awareness, and inter-personal interaction of the real society into the digital realm.
In this talk, we focus on approaches for supporting the massive number of users in such environments, consisting of scalability methods, zoning techniques, and areas of interest management. The focus will be on networking and system support and architectures, as well as research challenges still remaining to be solved.
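Area-of-interest management can be illustrated with the simplest circular-AOI filter: a player's state update is delivered only to players whose avatars lie within a fixed radius. This is a toy sketch with made-up names and numbers, not a production MMOG design:

```python
import math

AOI_RADIUS = 50.0  # hypothetical interest radius, in world units

def recipients(mover, players):
    """Return the ids of players that should receive mover's update.

    Filtering updates this way keeps per-player bandwidth roughly
    proportional to local crowd density rather than total world population.
    """
    mx, my = players[mover]
    out = []
    for pid, (px, py) in players.items():
        if pid != mover and math.hypot(px - mx, py - my) <= AOI_RADIUS:
            out.append(pid)
    return out

# "bob" is exactly 50 units from "alice"; "carol" is far outside the AOI.
world = {"alice": (0, 0), "bob": (30, 40), "carol": (500, 500)}
print(recipients("alice", world))
```

Real systems refine this with spatial indexing, zoning, and hysteresis at the AOI boundary, but the principle, sending each update only to interested neighbors, is the core of the scalability methods discussed here.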
The heart is a complex organic engine that converts chemical energy into work. Each heartbeat begins with an electrically-released pulse of calcium, which triggers force development and cell shortening, at the cost of energy and oxygen, and the dissipation of heat. My group has developed new instrumentation systems to measure all of these processes simultaneously while subjecting isolated samples of heart tissue to realistic contraction patterns that mimic the pressure-volume-time loops experienced by the heart with each beat. These devices are, in effect, 'dynamometers' for the heart that allow us to measure the performance of the heart and its tissues, much in the same way that you might test the performance of your motor vehicle on a 'dyno.'
This demanding undertaking has required us to develop our own actuators, force transducers, heat sensors, and optical measurement systems. Our instruments make use of several different measurement modalities which are integrated in a robotic hardware-based real-time acquisition and control environment and interpreted with the aid of a computational model. In this way, we can now resolve (to within a few nanoWatts) the heat released by living cardiac muscle fibers as they perform work at 37 °C.
Muscle force and length are controlled and measured to microNewton and nanometer precision by a laser interferometer, while the muscle is scanned in the view of an optical microscope equipped with a fluorescent calcium imaging system. Concurrently, the changing muscle geometry is monitored in 4D by a custom-built optical coherence tomograph, and the spacing of muscle-proteins is imaged in real-time by transmission-microscopy and laser diffraction systems. Oxygen consumption is measured using fluorescence-quenching techniques.
Equipped with these unique capabilities, we have probed the mechano-energetics of failing hearts from rats with diabetes. We have found that the peak stress and peak mechanical efficiency of tissues from these hearts was normal, despite prolonged twitch duration. We have thus shown that the compromised mechanical performance of the diabetic heart arises from a reduced period of diastolic filling and does not reflect either diminished mechanical performance or diminished efficiency of its tissues. In another program of research, we have demonstrated that despite claims to the contrary, dietary supplementation by fish-oils has no effect on heart muscle efficiency. Neither of these insights was fully revealed until the development of this instrument.
Optical sensors and techniques are used widely in many areas of instrumentation and measurement. Optical sensors are often, conveniently, ‘non-contact’, and thus impose negligible disturbance of the parameter undergoing measurement. Valuable information can be represented and recorded in space, time, and optical wavelength. They can provide exceptionally high spatial and/or temporal resolution, high bandwidth, and range. Moreover, optical sensors can be inexpensive and relatively simple to use.
At the Bioinstrumentation Lab at the Auckland Bioengineering Institute, we are particularly interested in developing techniques for measuring parameters from inside and outside the body. Such measurements help us to quantify physiological performance, detect and treat disease, and develop novel medical and scientific instruments. In making such measurements we often draw upon and develop our own optical sensing and measurement methods – from interferometry, fluorimetry and diffuse light imaging, to area-based and volume-based optical imaging and processing techniques.
In this talk, I will overview some of the interesting new optically based methods that we have recently developed for use in bioengineering applications. These include 1) diffuse optical imaging methods for monitoring the depth of a drug as it is rapidly injected through the skin, without requiring a needle; 2) stretchy soft optical sensors for measuring strains of up to several hundred percent during movement; 3) multi-camera image registration techniques for measuring the 3D shape and strain of soft tissues; 4) optical coherence tomography techniques for detecting the 3D shape of deforming muscle tissues; and 5) polarization-sensitive imaging techniques for classifying the optical and mechanical properties of biological membranes.
While these sensors and techniques have been motivated by applications in bioengineering, the underlying principles have broad applicability to other areas of instrumentation and measurement.
Nowadays, scientists, researchers, and practical engineers face a previously unseen explosion of the richness and the complexity of problems to be solved. Besides the spatial and temporal complexity, common tasks usually involve non-negligible uncertainty or even lack of information, strict requirements concerning the timing, continuity, robustness, and reliability of outputs, and further expectations like adaptivity and capability of handling atypical and crisis situations efficiently.
Model-based computing plays an important role in achieving these goals because it integrates the available knowledge about the problem at hand, in a proper form, into the procedure to be executed, where it acts as an active component during operation. Unfortunately, classical modeling methods often fail to meet the requirements of robustness, flexibility, adaptivity, learning, and generalizing abilities. Even soft computing based models may fail to be effective enough because of their high (exponentially increasing) complexity. To satisfy the time, resource, and data constraints associated with a given task, hybrid methods and new approaches are needed for the modeling, evaluation, and interpretation of the problems and results. A possible solution to the above challenges is offered by the combination of soft computing techniques with the novel approaches of anytime and situational modeling and operation.
Anytime processing is very flexible with respect to the available input information, computational power, and time. It is able to generalize previous input information and to provide a short response time if the required reaction time is significantly shortened due to failures or an alarm appearing in the modeled system, or if one has to make decisions before sufficient information arrives or the processing can be completed. The aim of the technique is to ensure continuous operation under (dynamically) changing circumstances and to provide optimal overall performance for the whole system. In case of a temporary shortage of computational power and/or loss of some data, the actual operation is continued maintaining the overall performance “at a lower price”, i.e., information processing based on algorithms and/or models of simpler complexity provides outputs of acceptable quality to continue the operation of the complete system. The accuracy of the processing may become temporarily lower, but it is possibly still enough to produce data for qualitative evaluations and to support further decisions.
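The anytime idea can be sketched with a toy interruptible computation: refine an answer until a deadline expires, then return the best estimate so far. The Leibniz series for π is only a stand-in workload, not an algorithm from the presentation:

```python
import time

def anytime_pi(deadline_s):
    """Refine an estimate of pi (Leibniz series) until the deadline expires,
    then return the current, possibly coarse but usable, approximation
    together with the number of terms completed."""
    start = time.monotonic()
    estimate, k, sign = 0.0, 0, 1.0
    while time.monotonic() - start < deadline_s:
        estimate += sign * 4.0 / (2 * k + 1)
        sign, k = -sign, k + 1
    return estimate, k

coarse, n1 = anytime_pi(0.002)  # tight deadline: rough but immediate answer
fine, n2 = anytime_pi(0.1)      # more time: more terms, tighter answer
print(n2 > n1)
```

The caller always gets a usable answer whose quality scales with the time granted, which is exactly the accuracy-for-response-time tradeoff described above.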
Situational modeling has been designed for the modeling and control of complex systems where traditional cybernetic models have not proved sufficient because the characterization of the system is incomplete or ambiguous due to unique, dynamically changing, and unforeseen situations. Typical cases are alarm situations, structural failures, the starting and stopping of plants, etc. The goal of situational modeling is to handle the contradiction arising from the existence of a large number of situations and the limited number of processing strategies, by grouping the possible situations into a treatable (finite) number of model classes of operational situations and by assigning certain processing algorithms to the defined processing regimes. This technique - similarly to anytime processing - offers a tradeoff between resource (including time and data) consumption and output quality.
The presentation gives an overview of the basics of anytime and situational approaches. Besides summarizing theoretical results and pointing out the remaining open questions (e.g. accuracy measures, data interpretation, transients), the author illustrates some of the possibilities offered by these new techniques through successful applications taken from the fields of signal and image processing, control and fault diagnosis of plants, analysis, and expert systems.
The scientific and industrial worlds have recently begun to look again with interest at the basic rules for performing reliability, availability, and safety analysis and design of complex electro-mechanical systems. The main failure modes of electronic devices and sensors, as well as the main techniques for failure-mode investigation, are of interest in modern system design. Statistical characterization of the main probability density functions and degradation models is mandatory for building lasting and safe products. The main reliability design techniques, such as fault tree analysis, the cut-set method, the minimal-path approach, and critical-block analysis, are requested by companies worldwide, as is knowledge of the main failure modes and of reliability databases and handbooks such as MIL-HDBK-217, OREDA, BELLCORE, etc. Maintenance policies, with special attention to corrective and preventive ones, are also affected by reliability design in terms of advantages and disadvantages when applied to electro-mechanical systems. The main safety standards, such as IEC 61508, IEC 61511, EN 50129, EN 50128, and EN 50126, are usually considered in industrial design. The aim of this talk is to enable companies to develop in-house confidence in advanced modelling techniques for reliability, availability, and safe design. In this spotlight, in addition to traditional and well-known statistical models, innovative modelling techniques based on statistical data representation will be introduced and tailored to specific case studies in the fields of bio-instrumentation, transportation, and oil & gas.
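As a small worked example of the block-based arithmetic underlying these techniques (a sketch, not one of the talk's case studies): for independent components, series blocks multiply reliabilities, while parallel (redundant) blocks multiply failure probabilities:

```python
# Basic reliability-block arithmetic used in fault-tree / cut-set analysis,
# assuming statistically independent components.

def series(*rel):
    """All components must work: R = product of R_i."""
    out = 1.0
    for r in rel:
        out *= r
    return out

def parallel(*rel):
    """System works if any component works: R = 1 - product of (1 - R_i)."""
    out = 1.0
    for r in rel:
        out *= (1.0 - r)
    return 1.0 - out

# Illustrative system: a sensor (R = 0.95) in series with a duplicated
# (parallel-redundant) processing channel, each channel with R = 0.90.
r_system = series(0.95, parallel(0.90, 0.90))
print(r_system)
```

Note how duplicating the 0.90 channel lifts that block to 0.99, so the sensor becomes the dominant contributor to system unreliability, the kind of insight cut-set and critical-block analysis makes systematic.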
The convergence of healthcare, instrumentation and measurement technologies will transform healthcare as we know it, improving quality of healthcare services, reducing inefficiencies, curbing costs and improving quality of life. Smart sensors, wearable devices, Internet of Things (IoT) platforms, and big data offer new and exciting possibilities for more robust, reliable, flexible and low-cost healthcare systems and patient care strategies. These may provide value-added information and functionalities for patients, particularly for those with neuro-motor impairments.
In this talk the focus will be on: hardware and software infrastructure for neuro-motor rehabilitation; distributed instrumentation and communication standards; motor rehabilitation based on virtual reality and serious games; use of cloud computing for healthcare monitoring; use of mobile technologies for data storage and data communication related to patients’ care; wearable sensor network integration with unobtrusive sensing technologies; Internet of Things technologies; data processing and data presentation that may assist healthcare professionals in objective, accurate assessment of patients’ motor activity and health status during daily activities; systems that support personalization of healthcare; and systems that promote independent living and empower individuals and their families for self-care and healthcare management.
Technologies for unobtrusive measurement of patient posture and balance, muscle activation, and movement characterization during neuro-motor rehabilitation will be presented and discussed during the talk. As part of these interactive environments, 3D image sensors for natural user interaction with rehabilitation scenarios and remote sensing of user movement, represented by the Leap Motion Controller and Kinect, as well as a thermographic camera for muscle activity evaluation, will be presented. Instrumented everyday rehabilitation equipment, such as smart walkers and crutches, force platforms, and wearable motor activity monitors based on smart sensors embedded in clothes and accessories, with electromyography (EMG), force, and acceleration measurement capabilities, will be presented and discussed. Sensing technologies as part of smart tailored environments, such as piezo-resistive force sensors, e-textile EMG, microwave Doppler radar, MEMS inertial devices for motion measurement, and optical fiber sensors, will be presented in the context of IoT technologies, where RFID is used for smart object identification and localization in augmented reality scenarios for therapy. Challenges related to simple and secure connectivity, signal processing, data storage, risk of data loss, data representation, and data analysis, including the development of specific metrics that can be used to evaluate the progress of patients during the rehabilitation process, will be discussed. Additional remote sensing technologies, including thermography for training effectiveness evaluation, will also be considered.
A network of physical things/objects, as part of a smart environment, based on sensors and embedded platforms with Internet connectivity, will collect and exchange data on monitored subjects under physical rehabilitation, which may also involve the use of serious games based on virtual and augmented reality. Training using these technologies may improve patients’ rehabilitation outcomes, allow objective evaluation of rehabilitation progress, enable earlier communication among health professionals and between health professionals and their patients, and may also support research based on the analysis of big data.
The world’s population is ageing fast. According to the United Nations, the median age across all world countries will rise from 28 now to 38 by 2050. It is also estimated that by 2050 the worldwide share of the population over 60 years of age will increase from 11% to 22%, with a higher percentage (33%) of elderly population in developed countries. In this context, governments and private investors, in addition to working to increase the efficiency and quality of healthcare, are searching for sustainable solutions to prevent increased healthcare expenditure associated with the higher care demands of elderly people. As such, instrumented environments, pervasive computing, the deployment of a seemingly invisible infrastructure of various wired and/or wireless communication networks, and intelligent, real-time interactions between different players, such as health professionals, informal caregivers, and assessed people, are being created and developed in various research institutions and healthcare systems.
This presentation reviews recent advances in the development of sensing solutions for vital signs and daily activity monitoring. The following will be highlighted:
- Vital sign acquisition and processing by devices embedded in clothes and/or accessories (e.g. smart wrist-worn devices) or in walking aids and transportation equipment such as walkers or manual wheelchairs. The strengths and drawbacks of their cardiac and respiratory assessment capabilities will be discussed, along with studies on cardiac sensing accuracy estimation and the influence of artefacts on cardiac function sensing through capacitively coupled electrocardiography, electromechanical film sensors, microwave Doppler radar ballistocardiography, and reflective photoplethysmography. Blood pressure, heart rate variability, and autonomic nervous system activity estimation based on virtual sensors included in wearable or object-embedded devices will also be presented.
- Daily activity signal acquisition and processing through microwave motion sensors, MEMS inertial measurement units, and infrared multi-point and laser motion sensors. Acquisition and conditioning of signals for motion assessment, and theragames based on motion sensing and recognition, will be presented. Using a set of metrics calculated from the information delivered by unobtrusive motion-capture sensors, an objective evaluation of rehabilitation session effectiveness can be performed. Several methods for diagnosis and therapy monitoring, such as time-frequency analysis, principal component analysis, and pattern recognition of motion signals, with application to gait rehabilitation evaluation, will be described. The work under the project Electronic Health Record for Physiotherapy, promoted by Fundação para Ciência e Tecnologia, Portugal, on developing serious games for physiotherapy based on Kinect technology will be presented.
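As a concrete illustration of the kind of vital-sign processing mentioned above, the sketch below estimates heart rate from a photoplethysmography-like waveform by simple peak detection. The signal, sampling rate, and threshold are illustrative assumptions, not parameters of the systems described in the talk.

```python
# Illustrative sketch: heart-rate estimation from a synthetic PPG-like
# waveform via threshold-based peak detection. All parameters are
# assumed values for demonstration only.
import math

FS = 100.0          # assumed sampling rate, Hz
HR_TRUE = 72.0      # simulated heart rate, beats per minute

def synth_ppg(duration_s=10.0):
    """Generate a crude periodic pulse-shaped waveform."""
    n = int(duration_s * FS)
    f = HR_TRUE / 60.0
    return [math.sin(2 * math.pi * f * i / FS) ** 3 for i in range(n)]

def estimate_hr(x, fs=FS, thresh=0.5):
    """Detect local maxima above thresh; convert mean inter-beat interval to bpm."""
    peaks = [i for i in range(1, len(x) - 1)
             if x[i] > thresh and x[i] >= x[i - 1] and x[i] > x[i + 1]]
    if len(peaks) < 2:
        return 0.0
    mean_ibi = (peaks[-1] - peaks[0]) / (len(peaks) - 1) / fs  # seconds per beat
    return 60.0 / mean_ibi

hr = estimate_hr(synth_ppg())   # close to the simulated 72 bpm
```

Real PPG signals carry motion artefacts and baseline wander, which is exactly why the talk emphasizes accuracy estimation and artefact influence; a fielded system would replace this naive detector with filtered, adaptive peak picking.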
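Principal component analysis, named above among the motion-signal methods, can be sketched in its simplest two-feature form. The two gait features (e.g. stride time versus swing amplitude) and their correlation are hypothetical illustrations; the closed-form 2x2 eigen-decomposition is standard.

```python
# Minimal PCA sketch on two hypothetical gait features: find the
# dominant principal axis from the 2x2 covariance matrix in closed form.
import math

def pca_2d(samples):
    """Return the unit eigenvector of the largest covariance eigenvalue."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    sxx = sum((x - mx) ** 2 for x, _ in samples) / n
    syy = sum((y - my) ** 2 for _, y in samples) / n
    sxy = sum((x - mx) * (y - my) for x, y in samples) / n
    # largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    lam = 0.5 * (sxx + syy + math.hypot(sxx - syy, 2 * sxy))
    vx, vy = sxy, lam - sxx          # eigenvector from (A - lam*I) v = 0
    norm = math.hypot(vx, vy) or 1.0
    return vx / norm, vy / norm

# synthetic, strongly correlated feature pairs: y ~ 0.5 * x plus tiny noise
data = [(x, 0.5 * x + 0.01 * ((x % 3) - 1)) for x in range(20)]
axis = pca_2d(data)   # dominant axis roughly along direction (2, 1)
```

Projecting each session's feature vector onto such axes is one way the cited metrics could summarize rehabilitation progress compactly.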
Concerning the embedded processing, communication, and interoperability requirements for smart sensing devices, a critical analysis of existing solutions and proposed innovative solutions are discussed. Special attention is given to wireless sensor networks, M2M, and IoT, as well as to ubiquitous computing, particularly smartphone applications for healthcare. A rapid-prototyping vital signs and motor activity monitor, as well as the usage of IEEE 1451.X smart sensor standards for biomedical applications, are included in the presentation.
The creation of novel smart environments, including remote vital sign and motor activity monitoring devices for health monitoring and physiotherapy interventions, promotes preventive, personalized, and participative medicine: in-home rehabilitation can provide more comfort to patients, better treatment efficiency, shorter recovery periods, and lower healthcare costs. The use of unobtrusive smart sensing and pervasive computing for health monitoring and physiotherapy interventions allows better assessment and communication between health professionals and clients, and increases the likelihood of developing and adopting best practice based on recognized research-based techniques and technologies and on sharing knowledge and expertise.
The tutorial will focus on sensor and measurement systems for new generations of vehicles with driver-assistance/autonomous capabilities. This is the main trend that is revolutionizing vehicles and the mobility of people and goods, and it is also making our cities smarter. The economic and social impacts of this application field are huge. Worldwide, 90 million vehicles are sold every year, yet 1.25 million people are killed on the roads. In the US, 3.1 billion gallons of fuel are wasted due to traffic congestion. Assisted and autonomous driving aim at increasing safety, at improving fuel efficiency and our lifestyle by avoiding traffic congestion, and at ensuring mobility for elderly and disabled people (inclusivity). The interest in this research subject is demonstrated by the huge investments of companies like Google, Intel, Tesla, Uber, Ford, and GM, to name just a few, and by technology alliances, e.g. between BMW and Intel, planning autonomous cars for 2021. A convergence between the automotive and ICT/electronics industries is foreseen in the near future. An example of this convergence is the 5G Automotive Association, http://www.5gaa.org/, which includes all major car manufacturers, telecom service providers, electronics industries, and measurement system providers (Keysight, Rohde&Schwarz).
The key enabling technologies in this scenario are the sensing and measurement systems needed for accurate vehicle positioning and navigation; for vehicle context-awareness, obstacle detection, and collision avoidance; and for driver assistance (enhanced vision, driver attention and fatigue detection).
The lecture will be divided into multiple sections.
First, in the Introduction, innovation and market trends in the field of sensor and measurement technologies applied to vehicles and smart mobility systems will be discussed, focusing on the next generation of driver-assisted/autonomous vehicles.
Then, new Radar and Lidar systems, appearing on board vehicles alongside arrays of imaging cameras, will be discussed for the measurement of obstacle position, distance, and relative speed. A trade-off has to be found between the power and size of active sensing systems like Radar and Lidar and their maximum measurement range. Moreover, in continuous-wave Radars, the limited frequency sweep range and the limited number of TX/RX channels bound the achievable resolution in distance, direction-of-arrival, and speed measurements. Examples of an X-band mobility surveillance Radar and a mm-wave automotive Radar will be provided.
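The resolution limits mentioned above can be quantified with textbook relations: range resolution is c/(2B) for sweep bandwidth B, and the beamwidth of a uniform linear array scales as lambda/(N*d). The parameter values below (77 GHz carrier, 1 GHz sweep, 8 half-wavelength-spaced channels) are illustrative assumptions for a generic mm-wave automotive Radar, not figures for any specific product.

```python
# Back-of-the-envelope FMCW radar resolution figures; all parameter
# values are illustrative assumptions.
import math

C = 299_792_458.0   # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Two targets closer than c / (2B) cannot be separated in range."""
    return C / (2.0 * bandwidth_hz)

def angular_resolution_deg(n_channels, carrier_hz, spacing_wavelengths=0.5):
    """Approximate beamwidth of a uniform linear array: lambda / (N * d)."""
    lam = C / carrier_hz
    d = spacing_wavelengths * lam
    return math.degrees(lam / (n_channels * d))

dr = range_resolution(1e9)            # ~0.15 m for a 1 GHz sweep
da = angular_resolution_deg(8, 77e9)  # ~14 degrees for 8 channels
```

These numbers make the trade-off concrete: finer range resolution demands more sweep bandwidth, and finer direction-of-arrival resolution demands more TX/RX channels.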
On the other hand, MOEMS (micro-opto-electro-mechanical systems)-based scanning systems, used to reduce the size and cost of Lidars, introduce distortions that worsen the accuracy of light-based measurements. Distortions due to fish-eye lenses, used to enlarge the field of view, likewise degrade the measurement performance of imaging sensors. Techniques to mitigate such artifacts will be discussed.
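One common family of mitigation techniques inverts a parametric radial distortion model. The sketch below uses the one-parameter division model, r_u = r_d / (1 + k * r_d^2); the coefficient k and the example point are made-up values, since real lenses require per-camera calibration.

```python
# Hedged sketch: correcting radial (fish-eye style) distortion with the
# one-parameter division model. The coefficient k is an assumed example;
# a real system would obtain it from camera calibration.
def undistort_point(xd, yd, k=-0.2, cx=0.0, cy=0.0):
    """Map a distorted image point back toward its undistorted position."""
    dx, dy = xd - cx, yd - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 / (1.0 + k * r2)     # division model: r_u = r_d / (1 + k r_d^2)
    return cx + dx * scale, cy + dy * scale

# a point pulled inward by barrel distortion is pushed back outward
xu, yu = undistort_point(0.5, 0.0)
```

Applying such a mapping per pixel (or per detected feature) is one way the artifact-mitigation techniques discussed in the lecture restore metric accuracy before measurement.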
Practical examples of traffic and road sign recognition systems and of image mosaicking for all-around view will be discussed. In addition, Lidar and imaging cameras suffer from decreased measurement performance under harsh operating conditions (e.g. bad weather or poor lighting).
New biometric sensing and measurement systems will also be reviewed, such as Radar-based contactless heart/breath-rate measurement and smart steering wheels for skin temperature/galvanic response measurement or heart-rate detection, with the final aim of assessing the driver’s attention or health status.
Concerning on-board sensors for positioning and navigation, recent advances in MEMS accelerometers and gyroscopes will be discussed. A careful analysis will be carried out of the position and navigation errors they cause through their bias and random-walk output noise.
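The severity of such errors is easy to demonstrate: a constant accelerometer bias, integrated twice in dead reckoning, produces a position error growing as 0.5 * b * t^2. The bias magnitude and rates below are illustrative assumptions, not figures for any particular MEMS device.

```python
# Sketch of dead-reckoning drift from a constant accelerometer bias.
# Bias value and time step are assumed for illustration.
def dead_reckon_error(bias_ms2, duration_s, dt=0.01):
    """Integrate a pure bias twice; return accumulated position error [m]."""
    v = p = 0.0
    for _ in range(int(round(duration_s / dt))):
        v += bias_ms2 * dt      # first integration: velocity error
        p += v * dt             # second integration: position error
    return p

err_60s = dead_reckon_error(bias_ms2=0.01, duration_s=60.0)
# analytic prediction: 0.5 * 0.01 * 60**2 = 18 m of drift in one minute
```

Even a 0.01 m/s^2 bias, an optimistic figure for a consumer-grade MEMS part, drifts by roughly 18 m in a minute, which is why inertial sensing must be fused with GPS, odometry, or Lidar-based localization.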
Finally, the lecture will analyze the trend in computing platforms, where parallel architectures and machine learning/AI (artificial intelligence) techniques will be exploited to manage many heterogeneous sources of measurements in real time and to make autonomous decisions.
Suggestions for future directions of interest to the I&M Society, and references to recent publications in IMS journals and conferences in the field of automated and connected vehicles, will be provided as a conclusion.
This three-part talk series deals with challenging problems of the modern high-tech manufacturing industry (electronic and memory products): (1) Screening for Reliability, (2) Detecting Systematic Defects, and (3) Test for Yield Learning. The offered solutions conform to the systematic, data-driven Six Sigma methodology, which is based on setting extremely high objectives and on collecting and deeply analysing comprehensive production data, with the aim of eliminating defects until at least six standard deviations separate the mean from the nearest specification limit in any process. In particular, the following successfully developed, real-world, industry-originated projects will be discussed and generalised for extended implementation and application of the obtained results:
1. Eliminating the Burn-in Bottleneck in IC Manufacturing
Reliability screening is one of several types of testing performed at different stages of the IC manufacturing process. It plays an important role in controlling and ensuring the quality and consistency of integrated circuits. One of the most widely used forms of reliability testing is burn-in testing, i.e., accelerated testing performed under elevated temperature and other stress conditions. Burn-in is normally associated with a long test time and high cost. As a result, burn-in testing is often a bottleneck of the entire IC manufacturing process, limiting its throughput. It is no surprise, therefore, that much attention and effort have been dedicated to reducing or even eliminating burn-in testing.
This presentation offers a step-by-step methodology for reducing burn-in test time by up to 90%, based on the extended use of the High-Voltage Stress Test (HVST) technique. Weibull statistical analysis is used to model the infant-mortality failure distribution.
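The role of the Weibull model can be sketched numerically: with shape parameter beta < 1 the failure rate decreases over time (the infant-mortality regime), so a short stress screen precipitates most early failures that a much longer burn-in would catch. The shape and scale values and stress durations below are made-up illustrations, not the parameters of the methodology itself.

```python
# Weibull sketch of infant-mortality screening; beta, eta and the
# stress durations are assumed illustrative values.
import math

def weibull_cdf(t, beta, eta):
    """Fraction of the weak sub-population failed by time t."""
    return 1.0 - math.exp(-((t / eta) ** beta))

beta, eta = 0.5, 10.0    # shape < 1 (decreasing hazard), scale in hours

full_burn_in = weibull_cdf(48.0, beta, eta)   # long conventional burn-in
short_stress = weibull_cdf(6.0, beta, eta)    # short accelerated screen
coverage_ratio = short_stress / full_burn_in  # share of coverage retained
```

Under these assumed parameters, a screen one-eighth as long still precipitates around 60% of the infant-mortality failures, which is the kind of trade-off a Weibull fit to real production data lets the methodology quantify.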
2. Defect Cluster Recognition for Fabricated Semiconductor Wafers
Many systematic failures introduced in wafer fabrication (the so-called frontend process) can only be caught later in IC manufacturing (i.e., during the backend process). Thus, there is a need for a simple yet accurate system to perform wafer defect cluster analysis based on fast knowledge extraction from production test data. The talk will cover the design and development of an automation tool to carry out this task: the Automatic Defect Cluster Analysis System (ADCAS). It is aimed at supporting backend-initiated efforts such as defect root-cause identification, die-level neighbourhood analysis, and yield analysis and improvement. It is suitable for plug-and-play application on semiconductor production databases while providing an excellent trade-off between simplicity of implementation and high accuracy of analysis.
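The core idea of defect cluster analysis can be illustrated with a toy wafer map: failing dies are grouped into 4-connected components, and components above a size threshold are flagged as candidate systematic clusters while isolated fails are treated as random defects. The map, threshold, and connectivity rule are made-up illustrations, not the ADCAS algorithm itself.

```python
# Toy sketch of defect-cluster detection on a binary wafer map
# (1 = failing die). Map and threshold are illustrative assumptions.
from collections import deque

def find_clusters(wafer, min_size=3):
    """Return connected groups of failing dies with at least min_size members."""
    rows, cols = len(wafer), len(wafer[0])
    seen, clusters = set(), []
    for r in range(rows):
        for c in range(cols):
            if wafer[r][c] != 1 or (r, c) in seen:
                continue
            comp, q = [], deque([(r, c)])   # BFS over 4-connected neighbours
            seen.add((r, c))
            while q:
                y, x = q.popleft()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and wafer[ny][nx] == 1 and (ny, nx) not in seen):
                        seen.add((ny, nx))
                        q.append((ny, nx))
            if len(comp) >= min_size:       # small groups treated as random fails
                clusters.append(comp)
    return clusters

wafer_map = [
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1],   # lone failing die: random defect, not a cluster
    [0, 0, 0, 0, 0],
]
clusters = find_clusters(wafer_map)   # one 4-die systematic cluster found
```

A production tool would work on real pass/fail bin maps and add cluster-shape classification (rings, scratches, edge effects) to point toward frontend root causes.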
3. Automatic Media Inspection in Magnetic Memory Drive Production
In the modern high-volume hard disk drive production process, an assembled product that fails the final test is normally not discarded; instead, it is sent for so-called Teardown, where it is disassembled into its constituent components. These components are thoroughly examined and retested for their individual functionality; if found to be in good operational condition, they are redeployed in new products. To retest the magnetic disk (or media), Laser Doppler Vibrometry (LDV) has traditionally been employed. Unfortunately, the LDV test is normally lengthy, causing a bottleneck in the Teardown and thus reducing overall manufacturing efficiency. To address the problem, manual visual inspection is often performed as a preliminary filtering step. Such an arrangement is not optimal, as it is open to human error; it can still be costly and has throughput limitations.
In this part of the talk series, the factors influencing successful and rapid image acquisition of micrometer-level defects on a specular surface are explored, namely camera spatial resolution, spectral properties, imaging-system signal-to-noise ratio, and lighting methods. A detection and classification scheme is offered for four major types of commonly occurring media defects.