Short-Duration Characterization of Source Emissions for Use in Predictive Software Models to Assess Worker Exposure: A Note of Caution

This article reports on the use of advanced Near-Field-Far-Field software for assessing short- versus long-duration data obtained minute-by-minute at two distances from a small source of an evaporating solvent located in an isolated subsurface structure (a type of confined space) accessed through a manhole containing one or two openings. The software uses these data to predict worker exposure to airborne chemical substances. Initial flash-off of volatile components was readily visible in graphs prepared from some tests, and especially so in initial output from the calibration utility contained in the modelling software. The calibration utility orients the mathematics of the software to measured data; it indicated constant magnitude from longer-duration emissions, consistent with constant composition. Source characterization of emissions from solvents containing multiple ingredients and constant initial mass deserves careful consideration because initial emissions may not represent overall behavior. This situation indicates the potential to bias predictions of worker and other types of exposure utilizing the same mathematics, especially during source characterization using measurements of short duration. This study advocates further investigation to develop guidelines for source characterization during use of modelling software that minimize the potential for error in exposure assessment.


Obligation of Employers to Assess Exposure to Hazardous Substances in the Workplace
One of the most important obligations imposed by regulators on employers is the requirement to determine exposure of workers to hazardous substances present or that may arise in the atmosphere of the workplace. OSHA (US Occupational Safety and Health Administration) requirements in this area provide one such example (OSHA, 2020).
Much has been written about strategies for determining exposure (AIHA, 2015). The physical component of the process for chemical substances typically requires measurement utilizing a method capable of detecting and quantifying the amount in a given volume of air (Schlecht & O'Connor, 2003). In the case of worker exposure, determining the amount typically requires placement of a measurement device in the breathing zone (Lynch, 1994) and collection of sufficient data for statistical reliability (AIHA, 2015). This aspect of the process is cumbersome and often involves the commitment of considerable time and resources.
As a result, there is considerable incentive to seek alternate methods for achieving the same endpoint. These methods can involve examination of existing data as well as experimentation. The foundation underlying alternate methods is the development and application of mathematical models (AIHA, 2009). Modelling is the process of describing observations to enable prediction of events. Mathematical modelling involves application of techniques to derive equations from numerical data. One application of the process is reconstruction to predict past exposure. Another is to predict present and future exposure. Mathematical models therefore provide the ability to predict conditions past, present and future. This capability obviously is very powerful and offers great benefit to practitioners when used appropriately and with due care and diligence.

The Well-Mixed Room
The basis of many models in industrial/occupational hygiene is the 'Well-Mixed Room' (AIHA, 2009; Burgess et al., 1989). The Well-Mixed Room is analogous to the well-mixed reactor in chemical engineering (Levenspiel, 1993). Equation 1 shows the generalized mathematical model that describes accumulation and dilution in a Well-Mixed Room (AIHA, 2009; Burgess et al., 1989). Accumulation is the process through which the concentration of contaminant increases over a period of time and dilution the process by which the concentration decreases. Equation 1 contains terms for concentration (C1 and C2) at different times (t1 and t2), rate of generation (G) in the space during the period of measurement, volumetric flow of dilution air (Q), and volume of the space (V), all in consistent units. The relative values of G and Q determine whether the value of C2 increases, remains stable or decreases with time.
C2 = G/Q + (C1 - G/Q) e^(-Q(t2 - t1)/V)    (Equation 1)

Where (consistent units): C2 is the concentration at time t2; C1 is the concentration at time t1; G is the generation rate; Q is the flow rate through the opening; t1 and t2 are the times bounding the interval; V is the volume of the space.
Variations of Equation 1 examine conditions in which C 1 = 0 at t 1 = 0, G = 0, and the equilibrium condition in which the concentration of contaminant does not change.
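The general solution and the special cases just described can be sketched numerically. The following is a minimal illustration of Equation 1, assuming consistent units; the specific values of G, Q, and V are illustrative, not taken from the study.

```python
import math

def well_mixed_room(c1, g, q, v, dt):
    """Concentration after elapsed time dt per Equation 1 (consistent units).

    c1: concentration at the start of the interval
    g:  generation rate (e.g. mg/min)
    q:  volumetric flow of dilution air (e.g. m^3/min)
    v:  volume of the space (e.g. m^3)
    dt: elapsed time t2 - t1 (e.g. min)
    """
    return g / q + (c1 - g / q) * math.exp(-q * dt / v)

# Illustrative values: G = 50 mg/min, Q = 2 m^3/min, V = 30 m^3.
# Build-up from clean air (C1 = 0 at t1 = 0):
c_buildup = well_mixed_room(0.0, g=50.0, q=2.0, v=30.0, dt=10.0)

# Pure decay (G = 0):
c_decay = well_mixed_room(100.0, g=0.0, q=2.0, v=30.0, dt=10.0)

# Equilibrium: as dt grows, the concentration approaches G/Q = 25 mg/m^3.
c_equilibrium = well_mixed_room(0.0, g=50.0, q=2.0, v=30.0, dt=1e6)
```

Note that the quantity q/v in the exponential is the air exchange rate per unit time, which is why the approach to equilibrium is governed by air changes in the space.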
While Equation 1 seems straightforward, application in a way that provides meaningful results during workplace and other types of activity presumes satisfaction of a number of requirements. Understanding these requirements is essential for application of the equation in exposure situations involving industrial and other environments. The equation as expressed above offers no insight about limitations and conditions that must be met. Reference sources that provide this equation do not always provide these limitations or state them in the detail needed for application in real-world environments.
Requirements most likely to be expressed in reference sources indicate that dilution air at volumetric flow (Q) entering the space contains negligible contamination and that generation of contaminant in the space (G) occurs at a constant rate. Requirements less likely to be expressed are that air exchange occurs through defined path(s) and that mixing of the contaminant with air in the space and with incoming uncontaminated air at volumetric flow (Q) occurs completely, thoroughly and rapidly. That is, the concentration of contaminant is uniform throughout the space at the two times, t1 and t2. This means that the concentrations C1 and C2 are really averages. The quantity Q/V is the same as air exchanges per unit of time, which links the intuitive concept of air changes to the mathematical one. Subsequent discussion will reveal additional requirements and limitations, and there remains the possibility of further limitations and requirements not yet identified.

Near-Field-Far-Field Models
Many industrial spaces do not meet the requirements for rapid and thorough mixing because of inadequate ventilation processes or insufficient supply of air to cause turbulent mixing. In these cases, the concentration of contaminant decreases with distance from the source rather than being uniform throughout the space as required for application of the model of the Well-Mixed Room. Application of the model of the Well-Mixed Room would be inappropriate in these situations.
Investigators have devised other mathematical models based on controlled experimental situations. One model that can accommodate data recorded at two distances from a source is the Near-Field-Far-Field Model (AIHA, 2009; AIHA, 2018a; AIHA, 2018b; EASInc., 2020; Hewett & Logan, 2019). The basis of the Near-Field-Far-Field Model is two imaginary structures, one nested within the other. The Near-Field structure, bounded by imaginary walls, contains the source. The Far-Field structure may coincide with the finite boundaries of the structure or may have imaginary boundaries nested within them. In the latter case, the remainder of the airspace is situated outside the imaginary boundaries of the Far-Field structure. This can include exchange of external air with air in the structure and mixing with contaminated air located outside the boundary of the Far-Field. AIHA (2009) and Hewett and Logan (2019) provide illustrations of these structures.
Questions intrinsic to the latter version of the model concern the shape and size of the Near-Field and Far-Field relative to the shape and size of the structure and whether these parameters have importance in application of the model in real-world situations (AIHA, 2018a; AIHA, 2018b; EASInc., 2020; Hewett & Logan, 2019). Exchange of clean air with contaminated air in the structure occurs in the outer airspace. The Far-Field exchanges air with the outer airspace and the Near-Field. The Near-Field in turn exchanges air with the Far-Field and receives contamination from the source. IH Mod (Industrial Hygiene Models) 2.0, published by the American Industrial Hygiene Association, incorporates models based on the Well-Mixed Room: a spill model; constant and decreasing emission models; eddy diffusion models; two-zone (Near-Field-Far-Field) models; and near- and mid-field plume models (AIHA, 2018a). IH Mod contains inputs for Near-Field boundaries of different size and shape. The IH Mod 2.0 Support File contains additional programs for calculating generation rate and parameters used in programs in IH Mod 2.0 (AIHA, 2018b). This software is freely available and is gaining considerable use through promotion in courses. Use of software to estimate exposures past, present, and future confers considerable advantage but entails considerable risk when end-users do not fully understand limitations intrinsic in the mathematics.
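For the simplest case (constant emission, no local controls), the two-zone model reduces at steady state to two algebraic relations: the Far-Field concentration is G/Q and the Near-Field concentration exceeds it by G/β, where β is the airflow exchanged between the two fields. A minimal sketch, assuming consistent units and illustrative values:

```python
def nf_ff_steady_state(g, q, beta):
    """Steady-state concentrations for the basic two-zone model
    (constant emission, no local controls), consistent units.

    g:    generation rate in the Near-Field (mg/min)
    q:    ventilation rate of the room (m^3/min)
    beta: airflow exchanged between Near- and Far-Field (m^3/min)
    """
    c_ff = g / q            # Far-Field: whole emission diluted by room airflow
    c_nf = c_ff + g / beta  # Near-Field: adds the local term G/beta
    return c_nf, c_ff

# Illustrative values: G = 100 mg/min, Q = 10 m^3/min, beta = 5 m^3/min
c_nf, c_ff = nf_ff_steady_state(g=100.0, q=10.0, beta=5.0)  # -> (30.0, 10.0)
```

The additional term G/β is what allows the model to represent the elevated concentration near the source that the Well-Mixed Room cannot.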
Use of mathematical models in real-world issues and applications raises several important questions, not the least of which concern limitations beyond those already mentioned. The first question relates to the appropriateness of the mathematics used in the model. The lack of systematic evaluation of the models in real-world situations is also an important factor (Arnold et al., 2017a). Arnold et al. (2017b) systematically evaluated the Well-Mixed Room and Near-Field-Far-Field Model. They conducted this evaluation under highly controlled conditions in an exposure chamber following ASTM Standard D5157 as a guide for experimental design and the AIHA Exposure Assessment criteria (AIHA, 2015; ASTM, 2014). Use of the exposure chamber permitted accurate control of all inputs. These authors collected more than 800 pairs of measurements, with conditions in the chamber varied one at a time.
The Well-Mixed Room estimates met the ASTM performance criteria for 88% to 97% of the pairs across the three chemicals used in the study, and 96% for the AIHA Exposure Assessment criteria (AIHA, 2015; Arnold et al., 2017b). The Near-Field estimates met modified ASTM criteria for 67% to 84% of the pairs while 69% to 91% of Far-Field estimates met these criteria. Agreement with AIHA Exposure Assessment criteria occurred for 72% of the Near-Field pairs and 96% of the Far-Field pairs, respectively.
The authors concluded that performance of the Near-Field-Far-Field model reflected the size of the chamber (2.0 m × 2.8 m × 2.1 m) (Arnold et al., 2017b). An important point not discussed in the article was the role of extreme turbulence created by fans in the exposure chamber. This level of turbulence was considerably greater than what is present in the majority of real-world workplaces (Baldwin & Maynard, 1998) and may represent a limitation of the experimental design in describing and evaluating conditions in them. Arnold et al. (2017a) also reported on assessment of exposure using measurements and application of the Well-Mixed Room and Near-Field-Far-Field Models under conditions in real-world workplaces. Evaluation of ten diverse exposure scenarios involving six different contaminants occurred at five workplaces. Personal time-weighted average (TWA) exposure measurements were obtained on individuals performing tasks with the substances under study. Where possible, source samples were collected using direct-reading instruments. These authors reported that in many cases, estimation of input data occurred indirectly because of inaccessibility of sources. Monte-Carlo calculations were used to estimate the mean and the 95th percentile exposure from the distribution of modeled exposures. The mean and 95th percentile of the distribution of the TWA exposure measurements were used as the decision metric against which modeled exposures were compared.
Results from the different scenarios indicated variable adherence to the expectations intrinsic in the models (Arnold et al., 2017a). These resulted in under- and over-estimation of exposures. Occurrence of under- or over-estimation was not predictable a priori. Hence, the inconsistency was neither consistently an overestimation nor an underestimation of reality. The Near-Field-Far-Field Model slightly outperformed the Well-Mixed Room model because of the inclusion of additional information in the input. The authors also explained that models do not accurately predict or account for short-term peak exposures, nor were they designed to do so. Real-world conditions considerably increased the uncertainty in application of these models. The authors concluded that this uncertainty occurred because of inability to evaluate emissions from sources, inability to control environmental conditions, and inability to predict and control workplace activity.
Authors Ganser and Hewett (Ganser & Hewett, 2017a;Ganser & Hewett, 2017b;Hewett & Ganser, 2017a;Hewett & Ganser, 2017b) further exposed the limitations of the mathematics contained in Equation 1 and the limitations of the software models used by Arnold et al. (2017a) in assessment of real-world workspaces, and demonstrated the need for further sophistication in existing software models. The articles by these authors proposed additional equations for considerably increasing the appropriateness and capability of these models in evaluating conditions in real-world workspaces.
In the first article, Hewett and Ganser (2017a) reported that the standard Well-Mixed Room ('one-box') model involving a constant generation rate (G = constant) is inappropriate for predicting occupational exposures during use of controls such as local exhaust ventilation and partial air purification and recirculation. Controls are common in real-world workplaces. These include jets of air produced by dedicated fans, general supply and exhaust systems, and local exhaust systems. These systems can also recirculate filtered and unfiltered air.
The more advanced models proposed by Hewett and Ganser (2017a) for constant emission contain variations involving combinations of general ventilation with and without recirculation of filtered air, local exhaust, and local exhaust with the return of filtered air.
Ganser and Hewett (2017a) extended these considerations to two-box (Near-Field-Far-Field) models. These authors explained that basic-level, two-box models are limited to scenarios where local controls are not used. The more advanced equations for two-box models presented in this article for constant emission permit real-world combinations of general ventilation with and without recirculation of filtered air, local exhaust, and local exhaust with the return of filtered air. The models also can accommodate steady-state and transient operation including cyclic repetitive and irregular emission. Additional variables introduced with the new models included efficiency of capture of freshly-generated contaminant and efficiency of filtration for return of filtered exhaust to the workspace. The new models also provide a structured procedure for calibrating the mathematics for the actual situation using measured data. The calibration procedure generates estimates for generation rate and the effective near-field flowrate.
Hewett and Ganser (2017b) also introduced advanced one-box models for decreasing emissions (G decreasing with time). These models also accommodate combinations of general ventilation with and without recirculation of filtered air, local exhaust, and local exhaust with the return of filtered air. The models also can accommodate steady-state and transient operation including cyclic repetitive and irregular emission. These models also provide a structured procedure for calibration to an actual situation using measured data. Ganser and Hewett (2017b) proposed advanced equations for two-box, well-mixed room, decreasing emission models. These models also accommodate combinations of real-world situations mentioned above.
The preceding review highlights the importance of mathematical models for estimating exposures and the enormous potential for software that can harness the power of the mathematics. At the same time, discussion about the limitations of these models, needed to foster consideration of their suitability for use in a particular situation, is slowly emerging, although it is not stated fully in one place. This information is critically important to end-users so as to enable the best possible implementation in assessment of real-world exposures. The preceding review indicated a number of influences on the mathematics used to describe the atmosphere in the space:
• mixing characteristics (poorly mixed, rapidly mixed, thoroughly mixed)
• profile of generation (zero, constant or decreasing emission, steady-state, transient, cyclic, repetitive)
• profile of air entering the space (uncontaminated, contaminated, filtered)
• profile of flow in the space (once-through, recirculation with or without filtration)
• input data consistent with the time-frame of the predictions (the extrapolation question)
• use or absence of local controls to remove contamination
(enrr.ccsenet.org Environment and Natural Resources Research Vol. 10, No. 2)
To this point, discussion has focused on investigations involving limited numbers of measurements. Arnold et al. (2017a) expressed caution about use of limited data or data obtained by calculation to predict the magnitude of emission sources for input into exposure assessment models. This article reports on investigation of the concern expressed by Arnold et al. using long-duration data from two fixed points measured minute-by-minute as inputs into software based on the advanced exposure assessment models mentioned above to identify further limitations and caveats.
The chamber described by McManus (2016) and McManus and Haddad (2019a) to study a series of conditions related to ventilation of an isolated subsurface structure was used in this investigation.
Previous work performed by investigators at the Bureau of Mines (Jones et al., 1936) and McManus (2016)

Method
This work occurred in a suburb of Vancouver, Canada in the yard of a construction contractor using the chamber discussed in previous studies (McManus, 2016; McManus & Haddad, 2019a).

[Figure 1. Equipment used to measure evaporation of solvent. Equipment included two instruments containing PID sensors and two instruments for measuring temperature and humidity (not visible). The upper instrument defined the boundary of the Far-Field; the lower instrument defined the boundary of the Near-Field. Note that the geometric shape and size of the boundary surfaces are not intuitive and that the software offers several choices. The structure into which the instrument stand was inserted has a square cross-section and vertical orientation. The aluminum pie plate containing the paper towel onto which the lacquer thinner is poured is also visible.]
The evaporating surface (a paper towel) positioned near the base of the structure contained 10 mL of well-aged lacquer thinner. The Material Safety Data Sheet indicated that the product contained 60% to 80% toluene, 10% to 20% methyl ethyl ketone (MEK), 5% to 10% methanol and 1% to 9% acetone (Recochem, 2011). Acetone, methanol, and MEK are considerably more volatile than toluene based on vapor pressure (NIOSH, 2010). These components evaporate preferentially from the solution compared to the toluene.
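The tendency toward preferential evaporation can be illustrated with a simple Raoult's-law estimate. The composition below (midpoints of the MSDS ranges) and the pure-component vapor pressures at roughly 20 °C are assumptions chosen for illustration, not measured values from this study:

```python
# Hypothetical Raoult's-law sketch of the fresh lacquer thinner mixture.
# Mass fractions are MSDS-range midpoints; vapor pressures (mmHg, ~20 C)
# are approximate literature values -- both are illustrative assumptions.
MW = {"toluene": 92.1, "MEK": 72.1, "methanol": 32.0, "acetone": 58.1}
VP_MMHG = {"toluene": 21.0, "MEK": 78.0, "methanol": 97.0, "acetone": 180.0}
MASS_FRAC = {"toluene": 0.70, "MEK": 0.15, "methanol": 0.075, "acetone": 0.05}

moles = {k: MASS_FRAC[k] / MW[k] for k in MW}
total = sum(moles.values())
mole_frac = {k: n / total for k, n in moles.items()}

# Ideal partial pressure of each component above the fresh mixture
partial = {k: mole_frac[k] * VP_MMHG[k] for k in MW}
```

Even in this idealized sketch, the partial pressures of the minor, more volatile components rival or exceed that of toluene above the fresh mixture, so the early vapor is enriched in them relative to the liquid, consistent with the flash-off behavior reported here.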
The PID sensors in the instruments respond to toluene, MEK, and acetone (GfG, 2009). The sensors were calibrated using isobutylene according to instructions of the manufacturer and reported concentration in 'isobutylene units'. The dataloggers recorded measurements once per minute. The instruments operated until exhaustion of the batteries (at least 800 min).
Curve-fitting to establish the mathematical relationship between the points occurred using Microsoft Excel.

Results
This investigation examined three conditions involving the manhole cover: a single center opening (n = 10), and two openings, center + circumference (n = 6) and 2x circumferential openings (n = 6), respectively. This study occurred from June 10 to August 12. Weather conditions, including temperature, were similar during this period at the location of the study: sun, rain, very low wind, and daytime atmospheric temperatures from the high teens to low 20s °C.
An initial small peak occurred almost immediately at the start of the process prior to the rise to a peak during some of the treatments (McManus & Haddad, 2019c). These initial peaks have important significance in the overall interpretation of the data reported here. Table 1 provides summary data for the arithmetic mean of data points for the various treatments.
Microsoft Excel provided the best fit of the data points as a 6th-order polynomial. The y-term is the arithmetic mean concentration expressed in isobutylene units and the x-term (expressed as t) is the time elapsed since the start of measurement. The mathematics forced an intercept of zero (C = 0 at t = 0) in all cases.
Substitution of values for time in the equations provides the basis for predicting concentration at the upper and lower positions.
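A zero-intercept sixth-order fit of the kind described can be sketched outside Excel. Since `numpy.polyfit` cannot force a zero intercept, the sketch below uses a least-squares fit over the basis t through t^6; the function names are illustrative, not part of the study's tooling:

```python
import numpy as np

def fit_poly6_zero_intercept(t, c):
    """Least-squares fit of c(t) = a1*t + ... + a6*t^6 with the intercept
    forced to zero, mirroring the zero-intercept trendline described in
    the text (illustrative sketch)."""
    t = np.asarray(t, dtype=float)
    basis = np.vstack([t**k for k in range(1, 7)]).T  # columns t .. t^6
    coeffs, *_ = np.linalg.lstsq(basis, np.asarray(c, dtype=float), rcond=None)
    return coeffs  # a1 .. a6

def predict_concentration(coeffs, t):
    """Evaluate the fitted polynomial at time(s) t; returns 0 at t = 0."""
    t = np.asarray(t, dtype=float)
    return sum(a * t**k for k, a in enumerate(coeffs, start=1))
```

Substituting times into `predict_concentration` reproduces the role the fitted Excel equations play in the text: predicting concentration at the upper and lower positions for any elapsed time.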
Calibrate, one of the utilities in TEAS (Task Exposure Assessment Simulator), enables calibration of the mathematics in the model for decreasing emission using observed data (elapsed time, concentration in the Far-Field, concentration in the Near-Field, and mass of the substance available to evaporate), in this case 8000 mg (10 mL × 0.8 g/mL × 1000 mg/g).
Isobutylene units (ppm of C4H8) are directly translatable into concentration of toluene assuming that the signal is attributable only to toluene, as might occur during a real-world investigation to determine exposure during evaporation of solvent. The very early peak that appeared in some tests (mentioned in previous discussion) is consistent with evaporation of the more volatile components (MEK, acetone and methanol) in the mixture and of those detectable by the PID sensor (toluene, MEK and acetone) (GfG, 2009; McManus & Haddad, 2019c).
Conversion of the level of the signal in isobutylene units (ppm) to toluene (ppm) requires multiplication by 0.53, the ratio of the concentration of isobutylene to the same concentration of toluene measured under the same conditions (GfG, 2009). Conversion of ppm levels to mg/m3 levels is required for further use of the mathematics in the models. Conversion of ppm to mg/m3 is given by (mg/m3) = (ppm)(MW)/24.45, where MW is the molecular weight (NIOSH, 2010). The value of 24.45 L is the molar volume of a perfect gas at 25 °C (298 K). The molar volume at 13 °C, the temperature of the airspace in the bottom of the chamber at which much of this work occurred, was 23.47 L. The latter value was used in calculations.
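The two conversions just described (PID response correction and ppm to mg/m3) can be combined in one step. The helper below is an illustrative sketch; its name is not from the study, and the defaults follow the values stated in the text (0.53 response ratio, MW 92 for toluene, molar volume 23.47 L at 13 °C):

```python
def isobutylene_to_toluene_mg_m3(ppm_isobutylene, response_ratio=0.53,
                                 mw=92.0, molar_volume_l=23.47):
    """Convert a PID reading in isobutylene units (ppm) to mg/m^3 of toluene.

    response_ratio: PID correction, isobutylene -> toluene (GfG, 2009)
    mw:             molecular weight of toluene (g/mol)
    molar_volume_l: molar volume at the airspace temperature (23.47 L at 13 C)
    """
    ppm_toluene = ppm_isobutylene * response_ratio
    return ppm_toluene * mw / molar_volume_l
```

For a reading of 100 isobutylene units this yields roughly 208 mg/m3 of toluene under the 13 °C assumption.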
Calibrate, a utility in TEAS, provides an estimate of Q (exchange rate in the structure), β (ventilation rate of the Near-Field at the distance of measurement from the evaporating surface), the mass of substance that evaporated, and the generation rate (Table 2). Adjusting the estimated mass provided by the mathematics to as close as possible to the known mass of 8000 mg forced modification of the other calculated values. Final adjusted values of M, the mass of liquid applied to the paper towel, were very close to the starting value of 8000 mg. The amount of adjustment between the initial calculated value and the final adjusted value sometimes was considerable, an indication that the expectation of the underlying mathematics differed considerably from the reality of the measured values.
Values of Q, the volumetric exchange rate of the airspace represented by the Far-Field, and β, the volumetric exchange rate of the Near-Field, rose rapidly from zero and then decreased rapidly from initial peak values to almost constant final values at 400 min and 300 min, respectively (Figure 2 and Figure 3). The greatest initial variation in Q and β occurred when two openings were present in the manhole cover. There was little if any difference in behavior between the two geometries of paired openings (center + circumference versus 2x circumferential openings). Hewett and Logan (2019) indicated that the concentrations in the Near-Field and Far-Field are related to the generation rate (G) and the ventilation rate of the Near-Field, β, by G = β(CNF − CFF). As in the situation involving data in Table 1, conversion from ppm levels to mg/m3 levels by multiplication (92/23.47) is required for further use of the mathematics in the models. G in Table 2 pertains to the rate of evaporation of liquid. Figure 4 shows the composite calculated behavior of the generation rate at the evaporating surface for the groups tabulated in Table 2. The graphs overlap completely. This overlap occurred despite differences in measured concentration of vapor reported in Table 2, treatment of the manhole cover (single opening, two openings [center + circumference, 2x circumference]), and weather conditions. Inputs to the models in IH Mod 2.0 include Q, the ventilation rate for the space; M0, the initial mass of the evaporating liquid; α, the rate of evaporation of the liquid; and S, the random air velocity. While IH Mod 2.0 offers models for Near-Field-Far-Field relationships, the mathematics cannot accommodate the complexities of real-world environments as discussed previously. As a result, IH Mod 2.0 was not considered further.
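The relation G = β(CNF − CFF), combined with the same ppm-to-mg/m3 conversion, gives a direct estimate of the generation rate from a pair of simultaneous readings. The sketch below uses an illustrative function name and the article's conversion factor (MW 92, molar volume 23.47 L):

```python
def generation_rate_mg_min(beta, c_nf_ppm, c_ff_ppm,
                           mw=92.0, molar_volume_l=23.47):
    """Estimate G (mg/min) from paired Near-Field/Far-Field PID readings
    via G = beta * (C_NF - C_FF), after converting ppm to mg/m^3 with
    the factor MW / molar volume (illustrative sketch).

    beta: ventilation rate of the Near-Field (m^3/min)
    """
    to_mg_m3 = mw / molar_volume_l
    return beta * (c_nf_ppm - c_ff_ppm) * to_mg_m3
```

Because the estimate depends on the instantaneous difference between the two fields, any early flash-off that inflates the Near-Field reading propagates directly into G, which is one mechanism by which short-duration source characterization can bias the model inputs.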

Discussion
This study provides insight into use of data generated minute-by-minute over a long duration at two fixed points having a Near-Field-Far-Field relationship to a non-replenishing, evaporating source of lacquer thinner (a product containing toluene, MEK, methanol and acetone) in a structure known to satisfy the requirements of a Well-Mixed Room (McManus, 2016; McManus & Haddad, 2019a). This configuration provided a rare opportunity to study the behavior of the evaporation process over a period of 800 min under three separate ventilating conditions (single opening in the center of the manhole cover, and two openings, center + circumference and circumferential only).
This study occurred during a period of relatively repeatable conditions (June 10 to August 12). Controllable factors of potential importance in this part of the discussion were the actions taken in the procedure. These were repeated without change during every test. The number of opening(s) in the manhole cover and their geometric relationship were the only controllable variables. External temperature and air motion along the ground and temperatures at different levels inside the structure were not controllable. Temperature near the bottom of the chamber varied little during the period of measurement (McManus, 2016).
An initial small peak occurred almost immediately at the start of the process prior to the rise to the main peak during some of the treatments (McManus & Haddad, 2019c). Similarly, the ratios of the lower-level values to the upper-level values for the arithmetic mean composite curves decreased rapidly from initial values around 3.5 to around 2.2. An initial decrease from a high level to a minimum over a period of 25 min occurred in each of the experimental treatments. Afterward, the ratios increased gradually to broad peaks with height ratios of 2.74 for the single opening in the center of the manhole cover, 3.09 for two openings, center + circumference, and 3.16 for two circumferential openings, respectively. The height ratios then decreased gradually to a value around 2.0. The fact that the height ratio behaved in a certain way does not imply that the values from which the ratio was calculated necessarily remained constant.
Availability of the large collection of data points obtained in a fixed geometry over long periods provided the opportunity to investigate the influence of evaporation (source characterization) on pairs of measurements (Near-Field and Far-Field) using the Calibrate utility in the TEAS software. The Calibrate utility forced the mathematical models incorporated into TEAS to conform to the measured data.
Results tabulated in Table 2 and shown in Figure 2 and Figure 3 showed development of an initial peak of activity in parallel with that observed by the other means described previously. The peaks in Figure 2 and Figure 3 decreased rapidly to an almost constant level. Full stabilization required up to 300 minutes and persisted for the remainder of the period of observation. Table 2 and Figure 4 suggest that ventilation occurred in an orderly manner.
MEK, methanol and acetone are considerably more volatile than toluene (NIOSH, 2010) and are likely to evaporate preferentially based solely on vapor pressure. This study, as well as graphical evidence of evaporation of the solvent (McManus & Haddad, 2019c), supports the contention that preferential evaporation occurs and is readily detectable during source characterization of long duration. In a real-world application of software for predicting workplace exposure, this observation poses the question: at what point in the sequence of measurements do the data reflect the reality of the situation? How could the user of the software know this a priori without detailed time-based measurement of the type demonstrated here? Is a brief assessment of measured quantities acceptable for use in a model having influence on potentially major decisions?
Industrial solvents such as lacquer thinner typically are mixtures. Industrial chemical products often contain small quantities of impurities. Impurities such as benzene in an industrial chemical product or solvent for example have considerably complicated assessment of exposure to many petroleum products using conventional methods of measurement (Schlecht & O'Connor, 2003). Preferential evaporation combined with very brief characterization of evaporating sources considerably complicates assessment of exposure to mixtures of ingredients. The variation in evaporation observed at the beginning of these tests and expressed in several ways in a system believed at a first level of approximation to behave in a constant manner illustrates the need for care and diligence in source characterization as an input into the mathematical models used for prediction of exposure to airborne hazardous substances.

Conclusions
Data from a real-world solvent system captured minute-by-minute over a long period of time showed the importance of full characterization prior to use in software models used for making predictions of phenomena occurring over the same duration. Further investigation is necessary to elucidate guidelines for determining limits of acceptance of data obtained through measurements of short duration for use in making meaningful predictions over considerably longer durations. This investigation further highlighted the need to identify limitations and assumptions intrinsic in software models but undisclosed, undiscussed or unexplained in documentation. Continued investigation to identify factors influencing outcomes suggested by software models is essential to optimize the credibility and hence the utility of these tools.