28.1 Dynamic signals can vary in both the time and frequency domains

The essence of any dynamic signal is that when we measure it at different points in time, the measured value changes. If that varying signal is plotted over time, a waveform is the result. Most dynamic signals of interest have the important property of periodicity – the waveform increases and decreases in value according to some regular cycle. There are three basic attributes of any time-varying signal (Figure 28.1):

1. Amplitude – The greatest value the signal achieves as it varies over a cycle.
2. Frequency – The number of cycles completed per unit time (the reciprocal of the period, which is the time taken to complete one cycle).
3. Phase – The state a system is in at any point in time. When comparing two otherwise identical repeating signals (e.g. sine waves) that are just offset from each other in time, we say they are ‘out of phase’ with each other. As shorthand, phase is sometimes recorded simply as the degree of offset between the two signals, usually measured from the origin.

Each of these properties can be analyzed, in different combinations, to provide three different views of a dynamic signal. Each view reveals different aspects of the process producing the signal, and each can be used to reason about what is happening to the process at any point in time and what may be done to keep it well controlled. The time domain provides the first and most natural view of a dynamic signal, recording simply how a measured signal varies over time. To obtain the second view of a signal, we note that although most waveforms have a complex shape, for practical purposes any such complex waveform can be deconstructed into a set of simpler component sine waves, a discovery made by the famous French mathematician Fourier (1768–1830). Each such component sine wave has its own intrinsic frequency. Understanding this allows us to unpack any waveform into its component sine waves and record their unique frequencies.
This frequency ‘fingerprint’ of a waveform provides us with our second view – the frequency domain – a time- and phase-free view of the signal (Figure 28.2). The mathematical process for converting a waveform into the frequency domain is known as a Fourier transform.

The third view typically used is of the changing phases of a signal over time – its phase portrait. In this time- and frequency-free view, we graph the different values a system takes over repeated cycles. Consider, for example, a frictionless pendulum as it swings toward and then away from a central or zero point (Figure 28.3). In the time domain, it looks like a sine wave. In the frequency domain, we see a single frequency band, associated with that sine wave. In the phase portrait, the pendulum’s different states are captured by graphing its distance from the middle point and its velocity at that point. At some part of its cycle, it is moving toward the middle point, and at the opposite end of the cycle it moves away. The resulting phase portrait is circular, describing this ever-repeating motion. In a world with friction, the pendulum loses energy. Its phase portrait spirals toward the zero middle point as the pendulum slowly loses velocity, and displacement from the middle point, as it comes to rest.

Examination of phase portraits is crucial to understanding the nature of a dynamic system. We can tell whether system states recur regularly, moving in a tightly defined band, and whether this recurrence has natural limits to the values the state can take. We can tell whether system states vary around a particular value – called an attractor. Independently of where a system starts in its phase space, its tendency is to be attracted toward the attractor value(s) (Figure 28.4).
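As a concrete illustration of this decomposition, the following sketch (using NumPy; the composite waveform of a 2 Hz plus a weaker 5 Hz sine is made up for the example, not taken from the text) recovers the frequency fingerprint of a waveform:

```python
import numpy as np

# Sketch of Fourier decomposition. The composite waveform below
# (2 Hz plus a weaker 5 Hz sine) is illustrative only.
fs = 100.0                        # samples per second
t = np.arange(int(10 * fs)) / fs  # 10 seconds of sample times
waveform = np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.sin(2 * np.pi * 5.0 * t)

spectrum = np.abs(np.fft.rfft(waveform))        # magnitude at each frequency
freqs = np.fft.rfftfreq(len(waveform), 1 / fs)  # the frequency of each bin

# The strongest bins form the frequency 'fingerprint' of the waveform
fingerprint = sorted(round(float(f), 3) for f in freqs[np.argsort(spectrum)[-2:]])
print(fingerprint)   # → [2.0, 5.0]
```

Because each component completes a whole number of cycles in the 10-second window, the spectral peaks fall exactly on FFT bins; real signals show some spectral leakage around each peak.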
The portrait can also help tell whether a dynamic system is chaotic, meaning both that initial conditions have a significant effect on the final outcome and that the system eventually traverses all of its operating space at some point in time. The ability to separate a complex dynamic signal into time and frequency domains, as well as into phase space, is crucial to informing our understanding of the processes that generate a signal. It is also significant because we can use time, frequency and phase information to help us monitor changes over time and interpret the changes that are observed.

28.2 Statistical control charts are a way of detecting when a dynamic system moves from one state to another

Most patients’ tests, such as a blood test or blood pressure reading, are taken only sporadically. If we were to plot them in time, the graph would be very sparse. In a patient monitoring setting, the same measures are taken repeatedly, often for practical purposes continuously, thus producing a dense time line. For sporadic measures, such as a blood test for cholesterol, it is typical to define a normal or abnormal value based on the population distribution of the measure in question (often normally distributed). Any reading within two standard deviations of the population mean is considered normal, and beyond that we define the test result as abnormal.

In the data-dense domain of patient monitoring, the task is a little different. Because we are gathering many data from a single individual, we are now able to compute statistics on what is ‘normal’ for that individual alone. For example, we can repeatedly sample a measure over time, and as long as we are happy that the measure is stable, we can calculate what is called a patient-specific normal range (Harris et al., 1980). This range allows us to detect when a new value falls out of the patient’s own normal distribution of results.
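The patient-specific normal range can be sketched in a few lines of Python. The heart-rate readings and the choice of k = 2 standard deviations are illustrative assumptions for the example; Harris et al. (1980) describe the underlying idea, not this exact code:

```python
import statistics

def patient_specific_range(history, k=2.0):
    # Patient-specific 'normal' range from a stable series of past
    # measurements: mean +/- k standard deviations (k=2 is a common choice).
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return mean - k * sd, mean + k * sd

# Illustrative daily heart-rate readings (beats/min) for one patient
history = [72, 75, 71, 74, 73, 76, 72, 74]
low, high = patient_specific_range(history)

new_value = 88
print(low <= new_value <= high)   # → False: 88 falls outside this patient's range
```

The range says nothing about whether these values are healthy – only whether a new reading is consistent with this patient’s own recent history.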
It is a good way of determining whether anything has changed significantly from previous measures over weeks or months. Importantly, a value in the patient-specific range does not mean that the patient is well, but only that the patient is stable.

The next feature of time-dense data is that we see much more moment-to-moment variation. The time plot of even very normal patient measures is thus anything but a straight line. If we monitor such dynamic physiological signals, it is important to separate two sources of signal variation:

• The first is intrinsic or natural to any stable system that is being monitored and is often called common cause variation. Thus, a heart rate varies with level of exertion, body position and sleep. Blood pressures also vary over a day for similar reasons. Variation may also occur from noise on the signal, small variations in how a measurement is taken or indeed changes to the measurement system itself, such as different length leads, replaced skin transducers and so forth.
• New events can be imposed on top of a stable system, or the system can shift from normal to abnormal, with a resultant deviation in performance. Such changes are sometimes called special cause variation. For example, a patient may move from normal cardiac function into heart failure, with resultant significant changes in the patterns and values of cardiac function metrics. Equally, a pulmonary embolus would be a new event that would immediately cause changes to a patient’s physiology.

When the monitoring task is to detect special cause variation (as opposed to fine-grained analysis of a signal), a standard approach is to build an SPC (statistical process control) chart for the signal you wish to track.
The general approach to creating an SPC chart is similar to other learning approaches reviewed in Chapter 27. First, historical data are used as a training set.
Rather than learning specific relationships in the data, however, analysis is limited to determining any natural (common cause) variability in the signal and to estimating some statistical boundaries of stable behaviour. Once such a statistical framework is built, it can be used prospectively to monitor the signal and detect when special cause variation occurs. SPC has found widespread application in healthcare and has been used to monitor physiological parameters of individual patients, through to surveillance of system-wide signals of health service performance (e.g. Tennant et al., 2007; Thor et al., 2007) (Table 28.1).

A simple control chart consists of upper and lower bounds for a signal (its control limits) and a centre line typically based on the mean of past values (Figure 28.5). Control limits are often set at three standard deviations from the centre line, to allow for common cause variation. As long as future measures remain in the envelope of the control lines, the measure is said to be in control, or stable. There are many ways to calculate the centre line, including exponentially weighted moving averages (EWMA) and cumulative sums (CUSUM) (Mohammed et al., 2008). If the mean or centre line is calculated on a past data set only, then the control and centre lines will be straight. If, however, they are constantly recalculated as new data arrive, then they will drift as new measurements come in.

When a signal strays outside these control boundaries, this triggers a search for special cause variations that may need attention. The actual rule for triggering such an alert depends on the application and the time available for recovery if an unexpected event occurs. Alerts may trigger after a number of measures repeatedly fall outside a control limit (to avoid triggering a false alarm from a transient variation), or they may trigger much earlier if there is clear movement of values toward the control limit (e.g.
a trend line forms in the band between two and three standard deviations).

Table 28.1 Example variables that can be tracked by statistical process control

Biomedical and physiological variables
• Cardiovascular metrics, e.g. heart rate, blood pressure, central venous pressure.
• Blood glucose and HbA1c.
• Peak expiratory flow rates.
• Urinary output.
• Oxygen saturation.

Biomedical instrumentation metrics
• Error in blood pressure measurements.

Other patient health variables
• Patient fall rate.
• Daily pain scales.
• Days between asthma attacks.
• Incontinence volume.
• Nausea after chemotherapy.

Clinical management variables

Time to complete a process element
• ‘Door to needle’ time (time from admission to thrombolytic therapy for acute myocardial infarction).
• ‘Vein to brain’ time (time from a blood test being taken to a clinician reading the reported test result).
• Average length of stay or mortality per patient diagnosis group in hospital, or in the intensive care unit.
• Time from discharge to general practitioner receiving a discharge summary.

Process event (and defect) rates
• Compliance with defined clinical indicators of care quality, e.g. measuring the blood pressure of hypertensive patients in primary care.
• Percentage of stroke patients receiving a brain scan within 2 days.
• Days since last infection for patients with central venous lines.
• Number of operations since last complication.
• Days since last adverse event in a unit.
• Documentation of specific information items in the record, e.g. allergy, presenting condition.
• Place in record where specific information items are documented, e.g. free text versus coded field.
• Deviations from protocol or guideline.
• Monthly medication errors.
• Out-of-hours ‘stat’ blood test orders.
• Monthly cases of MRSA.
• Monthly admission rate for diarrhoea cases.
• Number of diabetic patients having an HbA1c test.
• Mortality after coronary artery bypass graft.
Clinical decision-making
• Number of patients with tonsillitis and without tonsillitis who were receiving antibiotics.

Patient experience
• Patient satisfaction or complaints.
• Staff ratings.
• Quality rankings for process of care.

Financial resources
• Average cost per procedure.
• Staff cost per shift.
• Number of support staff versus providers.

HbA1c, glycosylated haemoglobin; MRSA, methicillin-resistant Staphylococcus aureus. After Thor et al., 2007.

Using knowledge about a signal in the frequency and phase domains, as well as about system structure, allows tighter control boundaries to be determined

SPC charts are nearly model-free representations of system behaviour, given that they make very few explicit assumptions about the mechanics of the system observed. For example, there are not necessarily even assumptions about the statistical distribution of the measures as they are tracked (e.g. normal, Poisson). As such, it should be possible to create tighter boundaries around a signal if we know more about the underlying causes of the variation being observed. One way to model a time-varying signal better is to explore it in the frequency domain and in phase space. Frequency domain analysis breaks a complex time-varying signal into its separate time-varying components. Each component could have its own SPC chart, and if we understood the separate underlying process that creates each component, control limits could be set accordingly. For example, heart rate varies over 24 hours because of wake and sleep cycles. On the small scale, heart rate varies from beat to beat because of the physiological mechanisms of the myocardial pacemaker system. The phase portrait of a complex waveform, or portraits of its frequency components, can tell us which is stable or unstable and can help in decisions about setting trigger rules around the control limits.
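The simple control chart described earlier – a centre line from the mean of historical values, with control limits at three standard deviations – can be sketched as follows. The respiratory-rate readings are illustrative, not from the text:

```python
import statistics

def spc_limits(training, k=3.0):
    # Centre line and k-sigma control limits estimated from a historical
    # training set (common-cause variation only).
    centre = statistics.mean(training)
    sd = statistics.stdev(training)
    return centre - k * sd, centre, centre + k * sd

def out_of_control(values, lcl, ucl):
    # Indices of measurements breaching the control limits, which would
    # trigger a search for special cause variation.
    return [i for i, v in enumerate(values) if not lcl <= v <= ucl]

# Illustrative respiratory-rate readings (breaths/min)
training = [10, 11, 9, 10, 10, 11, 9, 10, 11, 9]
lcl, centre, ucl = spc_limits(training)

new_readings = [10, 11, 13, 9, 10]
flagged = out_of_control(new_readings, lcl, ucl)
print(flagged)   # → [2]: the reading of 13 breaches the upper control limit
```

An EWMA or CUSUM centre line, as mentioned above, would replace the simple mean here to make the chart more sensitive to small, sustained shifts.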
28.3 Signal processing and interpretation occur at different levels and require increasing amounts of clinical knowledge

For many patient monitoring tasks, the complex nature of a physiological signal such as the ECG requires additional preparation of the signal before it can be analyzed. Additionally, signal interpretation can extend well beyond that achievable using standard SPC, which, as we have just seen, is almost model-free. This process of signal interpretation can occur at a number of levels, starting with signal acquisition and a low-level assessment of the validity of the signal, through to a complex assessment of its clinical significance. The different levels of interpretation that a signal may pass through are illustrated in Figure 28.6.

Sensors interact with a physical system to generate a signal

Physical sensors (or transducers) first detect a physical process and turn it into a signal. For example, the leads attached to the skin are the sensor component of the ECG measurement system. Sensors are designed to respond in some way to the physical system being measured (a pressure, a temperature or the blood concentration of a substance), and that response is converted into an electrical signal. For a continuous signal such as a pressure wave, we need to optimize the sensor design so that the sample it takes produces a reliable picture of the changes that are occurring within the physical system (much like designing the sample size and composition of an experiment so that it is representative of the population being studied).

The first process of interest is analogue to digital conversion (ADC). Although a sensor is typically a physical device that is able to detect the changes in a dynamic system continuously, digital computers see the world in discrete chunks. An AD converter takes a continuous waveform and re-represents it as a sequence of digital numbers. The precision of ADC describes the granularity of this transformation.
If, for example, only a single bit is available (representing just zero and one), then all we can do is represent two signal levels – enough, for example, to detect when a peak occurs (Figure 28.7). As we increase the number of bits available, a more detailed characterization of the signal is possible.

The next issue to consider in the fidelity of our digital representation of a continuous signal is the sampling rate. As the process of ADC takes repeated discrete snapshots of a continuously changing signal, the question is how often these snapshots must be taken. Recalling that a complex continuous waveform can be deconstructed by Fourier transform into its components, we identify the highest-frequency component because this will require the greatest number of samples. The Nyquist criterion tells us that a signal component of frequency f must be sampled at a rate of at least 2f. Anything less is undersampling and will not capture the true frequency of the signal. In Figure 28.8, we can see that sampling a sinusoidal wave at its own frequency f yields a straight line. At 2f we obtain enough information to see the true cyclical nature of the signal, and as we sample at rates higher than 2f, a richer picture of the shape of the waveform is obtained. Aliasing is an interesting phenomenon that occurs with undersampling. Instead of capturing a waveform at its true frequency, we recover an alias of that signal, which has a lower frequency (e.g. the sample rate of 1.8f in Figure 28.8). By picking the right ADC precision and sample rate, we can adjust our sensing process to obtain as faithful a representation of the continuous process being measured as needed. Sampling error is the difference between the actual and the estimated signal.

Signal processing is used to eliminate noise and artefact in a signal

The next task in signal interpretation is to decide whether the values that are measured are physiologically valid.
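The effect of the Nyquist rate can be demonstrated numerically. In this sketch (all frequencies are illustrative assumptions), a 9 Hz sine sampled above 2f is recovered correctly, while sampling below 2f yields a lower-frequency alias:

```python
import numpy as np

def dominant_frequency(f_true, fs, duration=2.0):
    # Sample a sine of frequency f_true at rate fs, then report the
    # strongest frequency found in the sampled data.
    n = int(duration * fs)               # number of snapshots taken
    t = np.arange(n) / fs                # sample times
    samples = np.sin(2 * np.pi * f_true * t)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return round(float(freqs[np.argmax(spectrum)]), 3)

print(dominant_frequency(9, 50))   # → 9.0 (50 Hz > 2f: true frequency recovered)
print(dominant_frequency(9, 12))   # → 3.0 (12 Hz < 2f: alias at |fs - f| = 3 Hz)
```

This folding of an undersampled frequency down to a lower one is the same phenomenon illustrated by the 1.8f sample rate in Figure 28.8.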
Is the signal genuine, or is it distorted by excessive noise, resulting in a low signal-to-noise ratio? Alternatively, is it distorted by a signal artefact arising from a source other than the process being monitored, such as patient movement? A signal artefact is defined as any component of the measured signal that is unwanted. It may be caused by distortions introduced through the measurement apparatus. Indeed, an artefact may result from another physiological process that is simply not of interest in the current context, such as a respiratory swing on an ECG trace. Thus, ‘one man’s artefact is another’s signal’ (Rampil, 1987).

Where possible, a noisy signal is ‘cleaned up’ by removing the artefactual or noise components of the signal. Doing so is important for several reasons. First, an artefact may be misinterpreted as a genuine clinical event and lead to an erroneous therapeutic intervention. Next, invalid but abnormal values that are not filtered can cause alarm systems to register false alarms when alarm limits are reached. Finally, artefact rejection improves the visual clarity of a signal when it is presented to a clinician for interpretation.

There are many sources of artefact in the clinical environment. False heart rate values can be generated by diathermy noise during surgery or by patient movement. False high arterial blood pressure alarms are generated by flushing and sampling arterial lines (Figure 28.9). These forms of artefact have contributed significantly to the generation of false alarms on patient monitoring equipment. One early study found that only 10 per cent of 1307 alarm events generated for cardiac postoperative patients were significant (Koski et al., 1990). Of these, 27 per cent were due to artefacts, e.g. sampling of arterial blood. The net effect of the distraction caused by high false alarm rates has been that clinicians have often turned off alarms intra-operatively, despite the increase in risk to the patient.
Although an artefact is best handled at its source through improvements in the design of the physical transducer system, this is not always possible or practical. The next best step is to filter out artefactual components of a signal, or register their detection, before using the signal for clinical interpretation. Many signal processing techniques have been developed to assist in noise reduction. Some artefacts, such as those caused by sampling and flushing a blood pressure catheter line, can be detected by their unique shape (see Figure 28.9). Other artefacts can be managed by using Kalman filtering, which computes a weighted average of the signal over a period of time and as a result smooths out the effects of random and transient noise in the signal.

As we saw earlier, a Fourier transform can deconstruct a complex time-varying signal into a series of sine waves of different frequencies and allow us to manipulate a signal in the frequency domain. If noise is known mainly to distort a signal in certain parts of its frequency spectrum, then only that part of the spectrum need be attenuated or completely filtered out. A high-pass filter eliminates components in the low-frequency range, up to some cut-off point, and a low-pass filter achieves the reverse. A signal can then be reconstructed in the time domain and should now be much cleaner and better represent just the physiological measure we are after.

It is also possible to analyze the frequency components of a signal to obtain information about the performance of the measurement system. When measuring a waveform such as arterial pressure, the signal may be overdamped, meaning the high-frequency signal components are attenuated compared with lower frequencies (similar to a sound wave being muffled in a padded room) (Figure 28.10). Damping may be caused by problems in the pressure measurement system, such as tubes that are too long, or air bubbles that are trapped in them.
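The frequency-domain filtering described above can be sketched with an FFT. This is a minimal illustration rather than a clinically usable filter; the cut-off and signal frequencies are assumptions for the example:

```python
import numpy as np

def lowpass_fft(signal, fs, cutoff_hz):
    # Minimal frequency-domain low-pass filter: transform the signal,
    # zero every component above the cut-off, and reconstruct it in
    # the time domain. Real monitors use properly designed digital filters.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    spectrum[freqs > cutoff_hz] = 0       # discard high-frequency noise
    return np.fft.irfft(spectrum, n=len(signal))

fs = 200.0
t = np.arange(int(4 * fs)) / fs           # 4 seconds of samples
clean = np.sin(2 * np.pi * 1.0 * t)       # 1 Hz 'physiological' component
noisy = clean + 0.3 * np.sin(2 * np.pi * 30.0 * t)   # 30 Hz noise on top
recovered = lowpass_fft(noisy, fs, cutoff_hz=5.0)
print(np.max(np.abs(recovered - clean)) < 0.01)   # → True: noise removed
```

A high-pass filter is the mirror image: zero the components below the cut-off instead of above it.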
An overdamped blood pressure measurement underestimates systolic pressures and overestimates diastolic pressures. A pressure measurement can also be underdamped (similar to hearing sounds in a tiled room), in which case high-frequency components are enhanced compared with lower frequencies. Underdamping overestimates systolic pressures and underestimates diastolic pressures. Analysis of a pressure signal in the frequency domain can detect the higher than normal frequency components of an underdamped system or the lower than normal components of an overdamped system.

Multiple features can be extracted from a single channel to support behavioural interpretation

Having established that a signal is probably artefact free, the next stage in its interpretation is to decide whether it defines a clinically significant condition. This may be done simply by comparing the value with a pre-defined patient or population normal range, or using SPC lines. In most cases, simple thresholding is of limited value because clinically appropriate signal ranges can be highly context specific and require a richer model to interpret signal meaning. The notion of an acceptable range is often tied up with expectations defined by the patient’s expected outcome and current therapeutic interventions.
