In any health-risk-related event, such as a hazardous airborne release, a key question that needs to be addressed is to what extent an individual is exposed to a hazardous pollutant/agent over a specific time interval at a specific location downstream of the release. Any systematic and reliable approach to this problem, especially at the application/operation level, requires knowledge not only of the exposure itself but also of its associated uncertainty, quantified using probability density functions. A radically new approach is proposed that (a) makes full use of the real, detailed inlet flow and release-rate signals, (b) performs a limited number of flow and dispersion simulations, in contrast to straightforward approaches, dealing only with steady-state, reference inflow and release conditions, and (c) projects the steady-state/reference results to real conditions via appropriate novel scaling approaches based on experimental evidence and theory. A validation exercise has been performed, with remarkable results, using the well-studied University of Hamburg S2 Michelstadt wind tunnel experiment, whose building structures represent distinct characteristics of typical central European cities. The message is that this is an attractive approach that needs further validation with the help of carefully designed experiments combined with dispersion model adjustments and improvements.

When considering accidental and/or deliberate releases of airborne hazardous substances, the release duration is often short and in most cases not precisely known. The downstream exposure in those cases is stochastic due to ambient turbulence and strongly dependent on the release duration. Depending on the adopted modelling approach, a relatively large number of dispersion simulations may be required to assess exposure and its statistical behaviour. The present study introduces a novel approach aiming to replace the large number of the abovementioned simulation scenarios with a single simulation of a corresponding continuous release scenario and to derive the exposure-related quantities for each finite-duration release scenario through simple relationships. The present analysis concentrated on dosages and peak concentrations as the primary parameters of concern for human health. The experimental and theoretical analysis supports the hypothesis that the dosage statistics for short releases can be correlated with the corresponding continuous-release concentration statistics. The analysis also shows that the peak concentration statistics for short-duration releases, in terms of ensemble average and standard deviation, are well correlated with the corresponding dosage statistics. However, for a more reliable quantification of the associated correlation coefficients, further experimental and theoretical research is needed. The probability density/cumulative distribution functions for dosage and peak concentration can be approximated by the beta function proposed in an earlier work by the authors for continuous releases.
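The central exposure quantity here, the dosage, is the time integral of concentration over a finite exposure window. As a minimal illustrative sketch (not the scaling method proposed in the study), it can be computed from a sampled continuous-release concentration record as follows; the series, sampling step and window values are all hypothetical:

```python
import numpy as np

def dosage(conc, dt, t_start, duration):
    """Dosage = time integral of concentration over an exposure window
    (rectangle rule). All inputs here are illustration values only."""
    i0 = int(round(t_start / dt))       # first sample in the window
    n = int(round(duration / dt))       # number of samples in the window
    return float(np.sum(conc[i0:i0 + n]) * dt)

# Toy continuous-release record: constant 2.0 units for 100 s at 1 Hz
c = np.full(100, 2.0)
print(dosage(c, dt=1.0, t_start=10.0, duration=20.0))  # -> 40.0
```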

The aim of this work is to develop an algorithm able to provide predictions of wind speed statistics (WSS) in renewable energy environments. The subject is clearly interesting, as predictions of storms and extreme winds are important for decision makers and emergency response teams in renewable energy environments, e.g., in places where wind turbines could be located, including cities. The goal of the work is achieved in two phases: (a) during the preparation phase, a large WSS database based on computational fluid dynamics (CFD) is constructed, which includes flow fields for different wind directions at all numerical grid points; (b) in the second phase, the algorithm is used to find the records in the WSS database whose meteorological conditions are closest to the meteorological conditions of interest. The evaluation of the CFD model (including both RANS and LES turbulence methodologies) is performed using the experimental data of the MUST (Mock Urban Setting Test) wind tunnel experiment.

This work was performed in the framework of the Urban Dispersion INternational Evaluation Exercise (UDINEE) project, coordinated by the European Commission’s Joint Research Centre. The case study was the Joint Urban 2003 (JU2003) experimental campaign, carried out in the central area of Oklahoma City, Oklahoma, USA. The UDINEE project concerned the cases of puff dispersion of JU2003, which are of special interest to scenarios related to security studies, such as explosions of radiological dispersal devices. Starting from the fact that puff-dispersion variability is substantial, especially in complicated urban built-up areas, even for puffs released under similar meteorological conditions, a methodology is presented for assessing this variability and it is applied to the dispersion of puffs in two of the Intensive Operation Periods of JU2003. A Lagrangian and a Eulerian dispersion model are applied for the computational simulations. For the Lagrangian model, variability is assessed by repeating the computations a large number of times. For the Eulerian model, variability is assessed by constructing the probability density functions of concentrations on the basis of the dispersion model results. Peak concentrations, dosages, puff-arrival times and puff durations have been considered. Percentiles calculated by the Lagrangian model for all the above quantities and by the Eulerian model for peak concentrations and dosages are compared with the measurements. The results are encouraging since in several cases the measured and computed ranges of values overlap.
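The variability assessment for the Lagrangian model reduces to computing percentiles over an ensemble of repeated realizations. A minimal sketch, using a synthetic lognormal ensemble purely as a stand-in for repeated puff simulations (not a claim about the JU2003 data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: peak concentrations at one sensor from 200
# repeated puff realizations (lognormal scatter assumed for illustration).
peaks = rng.lognormal(mean=0.0, sigma=0.8, size=200)

# Puff-to-puff variability expressed as ensemble percentiles
p05, p50, p95 = np.percentile(peaks, [5, 50, 95])
print(f"5th: {p05:.2f}  median: {p50:.2f}  95th: {p95:.2f}")
```

The computed percentile range is what would then be compared against the measured range at each sensor.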

Accurate prediction of the wind speed probabilities in the atmospheric surface layer is very important for wind energy assessment studies and many other practical applications such as the design and operation of wind turbines and human exposure to wind extremes. In a recent study, an optimized beta distribution was developed for the prediction of the wind speed probabilities in the atmospheric surface layer. Various uncertainties arise in real scenarios due to the composite atmospheric variability, the topography of the terrain, nearby obstacles, orographical features, and other synoptic conditions. Thus, in the first part of this study, the beta distribution is validated further with the wind speed database of the FUSION Field Trial 2007 (FFT-07) tracer field experiment for various atmospheric stability conditions. The model is applied without any change in its constants and a high degree of agreement with the field experiment is achieved. One main advantage of the proposed beta distribution is that it can be incorporated in computational models that are able to predict the mean, the variance and the integral time scale of the wind speed. The second part of the paper includes the incorporation of the beta distribution in the Reynolds Averaged Navier Stokes (RANS) methodology. Initially, the “RANS-beta” model is validated against wind speed measurements performed in a wind tunnel over a rough ground. The wind speed 25th, 50th and 75th percentiles were found to be highly dependent on the height and the model gave comparable results with the experiment. Then, the wind speed database of the field experiment JU2003 is used to examine the “RANS-beta” model’s performance. The 25th, 50th, 75th and 95th model percentiles at 20 sensors located inside the complex urban area were found to be in good agreement with the experimental ones (FAC2=0.8).

Scientific Journals

Journal of Wind Engineering and Industrial Aerodynamics, 184, 247-255

Publication year: 2019

In the case of dispersion of an airborne material from a point source in an urban environment, reliable prediction of the concentration statistical distribution by a numerical dispersion model presupposes the capability of the model to predict at least four statistical moments (mean, variance, skewness and kurtosis). In the present study, the beta distribution, the selection of which is justified by a previous study, is incorporated in the Reynolds Averaged Navier Stokes (RANS) methodology. The shape parameters of the beta distribution are calculated using the numerical results for the mean, variance and maximum concentration. The latter is calculated through a deterministic model which also uses the numerical results for the mean and variance of concentration as well as a hydrodynamic time scale. The validation of the new hybrid model “RANS-beta” is performed using the experimental dataset of the MUST wind tunnel experiment. The performance of the “RANS-beta” model for the skewness is very good (FAC2=0.811), while for the kurtosis it is acceptable (FAC2=0.557). The discrepancies are observed mainly at the edges of the plume. Future research will focus on the optimization of the mean flow field and turbulent quantities (Reynolds stresses, turbulent kinetic energy) and on new parameterizations for the turbulent diffusion term.
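The method-of-moments step described above, obtaining the two beta shape parameters from the mean, variance and maximum concentration, can be sketched as follows. The numerical values are illustrative, and the maximum concentration is assumed to come from the separate deterministic model mentioned in the abstract:

```python
import numpy as np

def beta_shape_params(c_mean, c_var, c_max):
    """Method-of-moments shape parameters of a beta distribution on
    [0, c_max]; c_max is assumed supplied by a separate deterministic
    maximum-concentration model, as described in the abstract."""
    m = c_mean / c_max                  # normalized mean
    v = c_var / c_max ** 2              # normalized variance
    k = m * (1.0 - m) / v - 1.0         # common factor
    return m * k, (1.0 - m) * k         # shape parameters a, b

def beta_skewness(a, b):
    """Analytical skewness of the Beta(a, b) distribution."""
    return 2.0 * (b - a) * np.sqrt(a + b + 1.0) / ((a + b + 2.0) * np.sqrt(a * b))

a, b = beta_shape_params(c_mean=0.2, c_var=0.01, c_max=1.0)
print(a, b, beta_skewness(a, b))  # a = 3, b = 12, positively skewed
```

Once a and b are known, the skewness and kurtosis follow analytically, which is what enables the FAC2 comparison against the measured higher moments.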


International Journal of Environment and Pollution, 65, (1-3), 125-148

Publication year: 2019

One of the goals of research on indoor air quality is the reduction of human exposure due to the dispersion of hazardous airborne materials. The purpose of this study is to analyse, by using Computational Fluid Dynamics (CFD), the flow and the concentration patterns of floor-emitted pollutants inside a real, mechanically ventilated office of simple geometry. The simulation results show complex airflow and high heterogeneity of concentration distribution. Another objective of the study is to examine how alternative ventilation scenarios (vents’ position and flow strength) could affect the human exposure in the same office. Furthermore, additional simulations and sensitivity tests are performed in order to discuss CFD reliability issues. Studies like this contribute to the determination of the parameters that influence the modelling results and prepare the ground for improved and more reliable future simulations of indoor pollutant dispersion.


The intentional or accidental release of airborne toxics poses a great risk to public health. During such incidents, the greatest source of uncertainty is the location and release rate of the substance, which is therefore information of high importance for emergency preparedness and response plans. A novel computational algorithm is proposed to estimate efficiently the location and release rate of an airborne toxic substance source based on observations of health effects; such data can be readily available in a real accident, in contrast to actual measurements. The algorithm is demonstrated by deploying a semi-empirical dispersion model and Monte Carlo sampling on a simplified scenario. Input data are collected at varying receptor points for toxics concentrations (C; the standard approach) and two new types: toxic load (TL) and health effects (HE; four levels). Estimated source characteristics are compared with the scenario values. The use of TL required the smallest number of receptor points to estimate the release rate and demonstrated the highest probability (>90%). HE required more receptor points than C, but with smaller deviations, while the probability was comparable, if not better. Finally, the algorithm assessed the source location very accurately when using C and TL, with comparable confidence, whereas HE demonstrated significantly lower confidence.
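The toxic load (TL) input type generalizes dosage by raising concentration to a substance-specific exponent before time integration, often written TL = ∫ Cⁿ dt. A minimal sketch, with a hypothetical exponent and receptor record:

```python
import numpy as np

def toxic_load(conc, dt, n=2.0):
    """Toxic load TL as the time integral of C^n (rectangle rule).
    The exponent n is substance-specific; n = 2 is an illustrative
    value here, not tied to any real chemical."""
    return float(np.sum(conc ** n) * dt)

# Hypothetical receptor record: 1 ppm for 100 s sampled at 1 s
c = np.ones(100)
print(toxic_load(c, dt=1.0))  # -> 100.0
```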


Journal of Wind Engineering & Industrial Aerodynamics, 177, 101–116.

Publication year: 2018

Large Eddy Simulation (LES) of atmospheric flows has become an increasingly popular modelling approach in recent years, as it has the potential to provide deeper insight into unsteady flow phenomena. LES can be improved and validated using specifically designed and well-documented wind tunnel datasets. In this work, we evaluate the performance of LES against a wind tunnel experiment in a semi-idealized city (“Michel-Stadt”; CEDVAL-LES database) and use the LES results to study the structure of the turbulent flow in this particular urban area. The first-, second- and third-order statistics are presented, as well as velocity frequency distributions and energy spectra. The results compare well with the experimental values. Information about special features of the flow field is also provided. A particular focus of this work is the influence of grid resolution on the results. Five different grids are examined and the required resolution for turbulent flow within the canopy layer is evaluated. This study reveals the strong potential of LES for urban flow simulations. It is shown that LES can assess highly non-Gaussian flow behaviour in street canyons, which has implications for urban ventilation, wind comfort assessment and urban design.

In this work, we present an inverse computational method for the identification of the location, start time, duration and quantity of emitted substance of an unknown air pollution source of finite duration in an urban environment. We considered a problem of transient pollutant dispersion under stationary meteorological fields, which is a reasonable assumption for the assimilation of available concentration measurements within 1 hour from the start of an incident. We optimized the calculation of the source-receptor function by developing a method which requires integrating only as many backward adjoint equations as there are available measurement stations. This results in high numerical efficiency. The source parameters are computed by maximizing the correlation function of the simulated and observed concentrations. The method has been integrated into the CFD code ADREA-HF and has been tested successfully by performing a series of source inversion runs using the data of 200 individual realizations of puff releases previously generated in a wind tunnel experiment.
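The correlation-maximization step for locating the source can be illustrated as follows: given observed sensor concentrations and a precomputed source-receptor function value for each candidate source cell, the candidate with the highest correlation is selected. All numbers below are hypothetical and the candidate layout is a toy stand-in for a CFD grid:

```python
import numpy as np

def best_source(obs, srf):
    """Rank candidate source cells by the correlation between observed
    concentrations and the source-receptor function values.

    obs : (n_sensors,) observed concentrations
    srf : (n_candidates, n_sensors) simulated source-receptor values,
          one row per candidate source location (hypothetical layout)
    """
    corr = np.array([np.corrcoef(obs, row)[0, 1] for row in srf])
    return int(np.argmax(corr)), corr

obs = np.array([1.0, 4.0, 2.0, 0.5])
srf = np.array([
    [0.9, 4.2, 1.8, 0.4],   # candidate 0: matches the observed pattern
    [4.0, 1.0, 0.5, 2.0],   # candidate 1: wrong pattern
])
idx, corr = best_source(obs, srf)
print(idx)  # -> 0
```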

The estimation of a hazardous contaminant unknown source characteristics (i.e., rate and location) in a complex urban environment using efficient inverse modelling techniques is a challenging problem that involves advanced computational fluid dynamics combined with appropriate mathematical algorithms. In this paper we further assess our recently proposed inverse source term estimation method (Efthimiou et al., 2017, Atmos. Environ., 170, 118-129) by applying it in two wind tunnel experiments simulating atmospheric flow and tracer dispersion following a stationary release in realistic urban settings, namely Michelstadt and Complex Urban Terrain Experiment (CUTE). The method appears to be robust and to predict with encouraging accuracy the source location and emission rate for both wind tunnel experiments.


Radiation from deposited radionuclides is indispensable information for environmental impact assessment of nuclear power plants and emergency management during nuclear accidents. Ground shine estimation involves multiple physical processes, including atmospheric dispersion, deposition, and soil and air radiation shielding. It remains unclear whether the commonly adopted “infinite plane” source assumption for the ground shine calculation is accurate enough, especially for areas with highly heterogeneous deposition distribution near the release point. In this study, a new ground shine calculation scheme, which accounts for both the spatial deposition distribution and the properties of the air and soil layers, is developed based on the point kernel method. Two sets of “detector-centered” grids are proposed and optimized for both the deposition and radiation calculations to better simulate the results measured by the detectors, which will be beneficial for applications such as source term estimation. The evaluation against the available data of Monte Carlo methods in the literature indicates that the errors of the new scheme are within 5% for the key radionuclides in nuclear accidents. The comparisons between the new scheme and the “infinite plane” assumption indicate that the assumption is tenable (relative errors within 20%) for areas located more than 1 km from the release source. Within the 1 km range, the assumption mainly causes errors for wet deposition, and the errors are independent of rain intensity. The results suggest that the new scheme should be adopted if the detectors are within 1 km of the source under stable atmospheric conditions (classes E and F), or within 500 m under slightly unstable (class C) or neutral (class D) conditions. Otherwise, the infinite plane assumption is reasonable, since the relative errors it induces are within 20%. The results here are based only on theoretical investigations; they should be further evaluated with real measurements in the future.


Journal of Industrial Ecology, DOI: 10.1111/jiec.12667, (Impact factor: 4.123)

Publication year: 2017

The Material Intensity (MI) of the economy remains among the most widely cited indicators, in international statistics and reports, evaluating the efficient use and productivity of natural resources in the economic process. In the context of the contemporary Economy-Wide Material Flow Accounting framework, the Material Intensity of a country is evaluated through the estimation of the ratio of the Domestic Material Consumption (DMC) to the Gross Domestic Product (GDP) index (DMC/GDP). Indeed, the essential contribution of natural resources to the economic process requires the establishment of reliable projections of this intricate relationship to the future. These projections may provide critical information to policy makers and practitioners in order to evaluate the future dynamics of the efficient use of natural resources in the production process. Towards this objective, the present study evaluates and proposes an alternative novel methodology for Material Intensity statistical projections, based on the Beta distribution, by using a deterministic model for predicting the maximum expected values. The parameters of the deterministic model are calculated from the estimated Material Intensity of the global economy. The evaluation of the model is then performed by using Material Intensity estimates from 107 individual countries. The agreement between the model and the estimates is very good. The proposed method’s merit is its simplicity, as by using two statistics of the Material Intensity (mean and variance) and an integral time scale, it is feasible to calculate the probabilities of the Material Intensity of any country with a high degree of confidence. The proposed method may have useful implications concerning the forecasting of future resources consumption trends, for evaluating resources productivity and setting sustainability goals, in the context of the Economy-Wide Material Flow Accounting framework.


Meteorology and Atmospheric Physics. DOI 10.1007/s00703-017-0506-0 (Impact factor 1.159)

Publication year: 2017

One of the key issues of recent research on dispersion inside complex urban environments is the ability to predict dosage-based parameters from the puff release of an airborne material from a point source in the atmospheric boundary layer inside the built-up area. The present work addresses the question of whether the computational fluid dynamics (CFD)–Reynolds-averaged Navier–Stokes (RANS) methodology can be used to predict ensemble-average dosage-based parameters related to puff dispersion. RANS simulations with the ADREA-HF code were therefore performed, where a single puff was released in each case. The present method is validated against the data sets from two wind-tunnel experiments. In each experiment, more than 200 puffs were released, from which ensemble-averaged dosage-based parameters were calculated and compared to the model’s predictions. The performance of the model was evaluated using scatter plots and three validation metrics: fractional bias, normalized mean square error, and factor of two. The model presented a better performance for the temporal parameters (i.e., ensemble-average times of puff arrival, peak, leaving, duration, ascent, and descent) than for the ensemble-average dosage and peak concentration. The majority of the obtained values of the validation metrics were inside established acceptance limits. Based on the obtained model performance indices, the CFD-RANS methodology as implemented in the code ADREA-HF is able to predict the ensemble-average temporal quantities related to transient emissions of airborne material in urban areas within the range of the model performance acceptance criteria established in the literature. The CFD-RANS methodology as implemented in ADREA-HF is also able to predict the ensemble-average dosage, but the dosage results should be treated with some caution, as in one case the observed ensemble-average dosage was underestimated by slightly more than the acceptance criteria allow. The ensemble-average peak concentration was systematically underpredicted by the model, to a degree higher than that allowed by the acceptance criteria, in one of the two wind-tunnel experiments. The model performance depended on the positions of the examined sensors in relation to the emission source and the building configuration. The work presented in this paper was carried out (partly) within the scope of COST Action ES1006 “Evaluation, improvement, and guidance for the use of local-scale emergency prediction and response tools for airborne hazards in built environments”.
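The temporal puff parameters evaluated above (arrival, peak and leaving times, duration) are typically extracted from a concentration time series with a threshold criterion. A sketch of one common convention, with synthetic data; the exact threshold definition varies between studies:

```python
import numpy as np

def puff_times(conc, t, threshold):
    """Threshold-based puff timing parameters: arrival, peak and
    leaving times, and duration. One common convention only."""
    above = np.flatnonzero(conc >= threshold)   # samples above threshold
    t_arrival, t_leaving = t[above[0]], t[above[-1]]
    t_peak = t[np.argmax(conc)]
    return t_arrival, t_peak, t_leaving, t_leaving - t_arrival

# Synthetic single-puff record sampled at 1 s
t = np.arange(10.0)
c = np.array([0, 0, 1, 3, 5, 4, 2, 1, 0, 0], dtype=float)
print(puff_times(c, t, threshold=1.0))  # -> (2.0, 4.0, 7.0, 5.0)
```

Ensemble averaging these quantities over the 200+ released puffs gives the parameters compared against the model.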

The wide range of values observed in a measured concentration time series after the release of a dispersing airborne pollutant from a point source in the atmospheric boundary layer, and the hazard level associated with the peak values, demonstrate the necessity of predicting the concentration probability distribution. For this purpose, statistical models describing the probability of occurrence are preferably employed. In this paper, a concentration database pertaining to a field experiment of dispersion in an urban-like area (MUST experiment) from a continuously emitting source is used for the selection of the best-performing statistical model between the Gamma and the Beta distributions. The skewness, the kurtosis and the inverses of the cumulative distribution function were compared between the two statistical models and the experiment. The evaluation is performed in the form of validation metrics such as the Fractional Bias (FB), the Normalized Mean Square Error (NMSE) and the factor-of-2 percentage. The Beta probability distribution agreed with the experimental results better than the Gamma probability distribution, except for the 25th percentile. Also, according to significance tests using the BOOT software, the Beta model presented FB and NMSE values that are statistically different from those of the Gamma model, except for the 75th percentiles and the FB of the 99th percentiles. The effect of the stability conditions and source heights on the performance of the statistical models is also examined. In both cases the performance of the Beta distribution was slightly better than that of the Gamma.
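The validation metrics used here (and in several of the other studies listed) have standard definitions. A sketch in Python, with hypothetical observed/modelled value pairs:

```python
import numpy as np

def fb(obs, mod):
    """Fractional bias: 0 is perfect; |FB| <= 0.3 is a common limit."""
    return 2.0 * (np.mean(obs) - np.mean(mod)) / (np.mean(obs) + np.mean(mod))

def nmse(obs, mod):
    """Normalized mean square error: 0 is perfect."""
    return np.mean((obs - mod) ** 2) / (np.mean(obs) * np.mean(mod))

def fac2(obs, mod):
    """Fraction of pairs where the model is within a factor of two."""
    r = mod / obs
    return float(np.mean((r >= 0.5) & (r <= 2.0)))

# Hypothetical paired observed/modelled percentile values
obs = np.array([1.0, 2.0, 4.0, 8.0])
mod = np.array([1.1, 1.8, 9.0, 7.0])
print(fb(obs, mod), nmse(obs, mod), fac2(obs, mod))  # FAC2 -> 0.75
```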


Journal of Loss Prevention in the Process Industries, 46, 23-36 (Impact factor: 1.818)

Publication year: 2017

This work presents the Computational Fluid Dynamics (CFD) – Reynolds-Averaged Navier Stokes (RANS) simulation and results of the dispersion of a hazardous airborne material which was released during a real accident in an industrial facility. During the event the material was released as flashing liquid from a point located inside a building, partially opened to the outside. According to estimations, after having evaporated, a total material mass in the order of 900 kg escaped to the atmosphere with a time-varying rate from five main openings of the building. The CFD simulations presented here concern the dispersion of the gaseous substance outside the building from where it was released but inside the complex industrial site. The few meteorological measurements that were available in the area have been used for the construction of inlet flow boundary conditions for the model. To take into account the uncertainty due to the scarcity of the available in situ meteorological data, and the temporal variability of the wind direction, three different incident wind directions have been considered in the computations. In each case, the CFD-RANS results for concentration have been compared with the real measurements recorded by the sensors during the accident, paired in space and time. The performance of the model depends on the incident wind direction and it varies depending on the position of the sensors. However, the overall agreement with the observations is considered as satisfactory taking into account the complexity of the problem and the uncertainties associated with the emission rate and the meteorological conditions. The maximum measured concentrations near the sources are predicted very well by the model.


Journal of Turbulence, 18, 2, 115-137 (Impact factor: 1.472)

Publication year: 2017

One of the key issues of recent research on the dispersion inside complex urban environments is the ability to predict individual exposure (maximum dosages) of an airborne material which is released continuously from a point source. The present work addresses the question whether the computational fluid dynamics (CFD)–Reynolds averaged Navier–Stokes (RANS) methodology can be used to predict individual exposure for various exposure times. This is feasible by providing the two RANS concentration moments (mean and variance) and a turbulent time scale to a deterministic model. The whole effort is focused on the prediction of individual exposure inside a complex real urban area. The capabilities of the proposed methodology are validated against wind-tunnel data (CUTE experiment). The present simulations were performed ‘blindly’, i.e. the modeller had limited information for the inlet boundary conditions and the results were kept unknown until the end of the COST Action ES1006. Thus, a high uncertainty of the results was expected. The general performance of the methodology due to this ‘blind’ strategy is good. The validation metrics fulfil the acceptance criteria. The effect of the grid and the turbulence model on the model performance is examined.


Atmospheric Environment, 170, 118-129 (Impact factor: 3.629)

Publication year: 2017

An improved inverse modelling method to estimate the location and the emission rate of an unknown point stationary source of passive atmospheric pollutant in a complex urban geometry is incorporated in the Computational Fluid Dynamics code ADREA-HF and presented in this paper. The key improvement in relation to the previous version of the method lies in a two-step segregated approach. At first, only the source coordinates are analysed using a correlation function of measured and calculated concentrations. In the second step, the source rate is identified by minimizing a quadratic cost function. The validation of the new algorithm is performed by simulating the MUST wind tunnel experiment. A grid-independent flow field solution is first attained by applying successive refinements of the computational mesh, and the final wind flow is validated against the measurements quantitatively and qualitatively. The old and new versions of the source term estimation method are tested on a coarse and a fine mesh. The new method appeared to be more robust, giving satisfactory estimations of source location and emission rate on both grids. The performance of the old version of the method varied between failure and success and appeared to be sensitive to the selection of the model error magnitude that needs to be inserted in its quadratic cost function. The performance of the method depends also on the number and the placement of the sensors constituting the measurement network. Of significant interest for the practical application of the method in urban settings is the number of concentration sensors required to obtain a “satisfactory” determination of the source. The probability of obtaining a satisfactory solution – according to specified criteria – by the new method has been assessed as a function of the number of sensors that constitute the measurement network.


Boundary Layer Meteorology, 163, 2, 179-201 (Impact factor: 2.573)

Publication year: 2017

Wind fields in the atmospheric surface layer (ASL) are highly three-dimensional and characterized by strong spatial and temporal variability. For various applications such as wind-comfort assessments and structural design, an understanding of potentially hazardous wind extremes is important. Statistical models are designed to facilitate conclusions about the occurrence probability of wind speeds based on knowledge of low-order flow statistics. Being particularly interested in the upper tail regions, we show that the statistical behaviour of near-surface wind speeds is adequately represented by the Beta distribution. By using the properties of the Beta probability density function in combination with a model for estimating extreme values based on readily available turbulence statistics, it is demonstrated that this novel modelling approach reliably predicts the upper margins of encountered wind speeds. The model’s basic parameter is derived from three substantially different calibrating datasets of flow in the ASL, originating from boundary-layer wind-tunnel measurements and direct numerical simulation. Evaluating the model based on independent field observations of near-surface wind speeds shows a high level of agreement between the statistically modelled horizontal wind speeds and measurements. The results show that, based on knowledge of only a few simple flow statistics (mean wind speed, wind-speed fluctuations and integral time scales), the occurrence probability of velocity magnitudes at arbitrary flow locations in the ASL can be estimated with a high degree of confidence.

Scientific Journals

Radioprotection 51(HS2), S101-S103 (Impact factor: 0.508)

Publication year: 2016

New schemes for calculating dry and wet deposition of particles have been implemented in the Local Scale Model Chain of RODOS in the frame of Work Package 4 of the PREPARE project. Care has been taken not to increase computational times and, at the same time, to keep simulations performed with the previous deposition schemes reproducible. The new schemes take into account particle properties: size and density. Important assumptions adopted so far are that the particle properties are the same for all nuclides and that the properties remain constant throughout the release and dispersion. All formulations adopted for calculating dry and wet deposition coefficients for particles depending on their properties are widely used in the scientific literature. The first tests that have been performed show that the new particle deposition schemes behave as expected and deposition is scaled depending on the size and the density of the particles. The calculated deposition patterns for fine particles with the new scheme are very similar to those calculated by the previous scheme. For larger particles, the differences in comparison to results with the previous deposition scheme justify the implementation of the new scheme in JRODOS.
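The abstract does not name the adopted formulations, but a standard size- and density-dependent ingredient of dry-deposition parameterizations is the Stokes gravitational settling velocity; a minimal sketch (slip correction and surface resistances omitted):

```python
# Stokes settling velocity: v_s = rho_p * g * d^2 / (18 * mu).
# This is one widely used building block of particle deposition schemes,
# not necessarily the exact formulation implemented in JRODOS.
G = 9.81         # m/s^2, gravitational acceleration
MU_AIR = 1.8e-5  # Pa*s, dynamic viscosity of air

def settling_velocity(diameter_m, density_kg_m3):
    return density_kg_m3 * G * diameter_m ** 2 / (18 * MU_AIR)

# Deposition scales strongly with size (quadratic) and linearly with density:
for d_um in (0.1, 1.0, 10.0):
    v = settling_velocity(d_um * 1e-6, 2000.0)
    print(f"d = {d_um:5.1f} um -> v_s = {v:.2e} m/s")
```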

Scientific Journals

Environ Fluid Mech, 16, 5, 899–921, DOI 10.1007/s10652-016-9455-2 (http://rdcu.be/j93f) (Impact factor: 1.603)

Publication year: 2016

The peak values observed in a measured concentration time series of a dispersing gaseous pollutant released continuously from a point source in urban environments, and the hazard level associated with them, demonstrate the necessity of predicting the upper tail of concentration distributions. For the prediction of concentration distributions, statistical models are preferably employed which provide information about the probability of occurrence. In this paper a concentration database pertaining to a field experiment is used for the selection of the statistical distribution. The inverses of the gamma cumulative distribution function (cdf) for the 75th–99th percentiles of concentration are found to be more consistent with the experimental data than those of the log-normal distribution. The experimental values have been derived from measured high-frequency time series by first sorting the concentrations and then finding the concentration which corresponds to each probability. Then the concentration mean and variance that are predicted with the Computational Fluid Dynamics-Reynolds Averaged Navier–Stokes (RANS) methodology are used to construct the gamma distribution. The proposed model (“RANS-gamma”) is included in the framework of a computational code (ADREA-HF) suitable for simulating the dispersion of airborne pollutants over complex geometries. The methodology is validated by comparing the inverses of the model cdfs with the observed ones from two wind tunnel experiments. The evaluation is performed in the form of validation metrics such as the fractional bias, the normalized mean square error and the factor-of-two percentage. From the above comparisons it is concluded that the overall model performance for the present cases is satisfactory.
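A minimal sketch of the "RANS-gamma" construction with hypothetical mean and variance values (a Monte Carlo inversion stands in for the gamma inverse cdf here; `scipy.stats.gamma.ppf` would do this directly):

```python
import random

# Hypothetical CFD-RANS predictions at one location (not the validation cases):
c_mean, c_var = 2.0, 1.5   # predicted concentration mean and variance

# Method-of-moments gamma parameters: shape k, scale theta
k = c_mean ** 2 / c_var
theta = c_var / c_mean

# Invert the cdf empirically by sorting samples, mirroring how the paper
# extracts experimental percentiles from sorted concentration time series.
random.seed(1)
samples = sorted(random.gammavariate(k, theta) for _ in range(100_000))
for p in (0.75, 0.90, 0.99):
    c_p = samples[int(p * len(samples))]
    print(f"{int(p * 100)}th percentile concentration: {c_p:.2f}")
```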

Scientific Journals

Atmospheric Environment, 100, 48-56 (Impact factor: 3.629)

Publication year: 2015

One of the key issues in recent research on dispersion in complex urban areas is the ability to predict high concentrations and the concentration distribution of a pollutant released continuously from a point source. The present work addresses the question of whether the CFD-RANS methodology can provide valid predictions of concentration peaks and distributions. A probabilistic and a deterministic approach are incorporated in the CFD-RANS code ADREA. Innovative algebraic equations for the calculation of the concentration time scales as a function of the hydrodynamic and pollutant travel times are used. The capabilities of the new methodology are validated against wind tunnel experimental data under well-described boundary conditions and representative concentration measurements. The comparisons between the model and the wind tunnel data gave fairly good results.

Scientific Journals

Journal of Hazardous Materials, 285, 37–45 (Impact factor: 6.025)

Publication year: 2015

A wide range of consumer and personal care products may, during their use, release significant amounts of volatile organic compounds (VOC) into the air. The identification and quantification of the emissions from such sources is typically performed in emission test chambers. A major question is to what degree the obtained emissions are reproducible and directly applicable to real situations. The present work attempts partly to address this question by comparison of selected VOC emissions in specific consumer products tested in chambers of various dimensions. The measurements were performed in three test chambers of different volumes (0.26–20 m³). The analytic performance of the laboratories was rigorously assessed prior to chamber testing. The results show emission variation for major VOC (terpenes); however, it remains, in general, within the same order of magnitude for all tests. This variability does not seem to correlate with the chamber volume. It rather depends on the overall testing conditions. The present work is undertaken in the frame of the EPHECT European Project.

Scientific Journals

Part II: Validation of a Deterministic Model with Wind Tunnel Experimental Data, Toxics, 3(3), 259-267; doi:10.3390/toxics3030259

Publication year: 2015

The capability to predict short-term maximum individual exposure is very important for several applications including, for example, deliberate/accidental release of hazardous substances, odour fluctuations or material flammability level exceedance. Recently, the authors proposed a simple approach relating maximum individual exposure to parameters such as the fluctuation intensity and the concentration integral time scale. In the first part of this study (Part I), the methodology was validated against field measurements, which are governed by the natural variability of atmospheric boundary conditions. In Part II of this study, an in-depth validation of the approach is performed using reference data recorded under truly stationary and well documented flow conditions. For this reason, a boundary-layer wind-tunnel experiment was used. The experimental dataset includes 196 time-resolved concentration measurements which capture the dispersion from a continuous point source within an urban model of semi-idealized complexity. The data analysis allowed the improvement of an important model parameter. The model performed very well in predicting the maximum individual exposure, with 95% of the predictions falling within a factor of two of the observations. For large time intervals, an exponential correction term has been introduced into the model based on the experimental observations. The new model is capable of predicting all time intervals, giving an overall factor-of-two score of 100%.
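The factor-of-two score (FAC2) quoted in the validation is simply the fraction of prediction/observation pairs lying within a factor of two of each other; a minimal sketch with made-up values:

```python
# FAC2: fraction of pairs with 0.5 <= predicted/observed <= 2.
# The obs/pred values below are invented for illustration.
def fac2(predicted, observed):
    pairs = [(p, o) for p, o in zip(predicted, observed) if o > 0]
    within = sum(1 for p, o in pairs if 0.5 <= p / o <= 2.0)
    return within / len(pairs)

obs  = [1.0, 2.5, 0.8, 4.0, 1.2]
pred = [1.4, 2.0, 0.3, 3.5, 2.1]
print(f"FAC2 = {fac2(pred, obs):.2f}")
```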

Scientific Journals

Part I: Validation of a Deterministic Model with Field Experimental Data, Toxics, 3, 259-267, doi:10.3390/toxics3030259

Publication year: 2015

The release of airborne hazardous substances in the atmosphere has a direct effect on human health: during inhalation, a quantity of the pollutant enters the human body through the respiratory system and can cause serious or even irreparable damage to health. One of the key problems in such cases is the prediction of the maximum individual exposure. Current state-of-the-art methods, which are based on the concentration cumulative distribution function and require knowledge of the concentration variance and the intermittency factor, have limitations. Recently, the authors proposed a deterministic approach relating maximum individual exposure to parameters such as the fluctuation intensity and the concentration integral time scale. The purpose of the first part of this study is to validate the deterministic approach with the extensive dataset of the MUST (Mock Urban Setting Test) field experiment. This dataset includes 81 trials, which cover a wide range of atmospheric conditions and stability classes, and contains in total 4004 non-zero concentration sensor records with time resolutions of 0.01–0.02 s. The results strengthen the usefulness of the deterministic model in predicting short-term maximum individual exposure. Another important output is the estimation of the uncertainty involved in the methodology.

Scientific Journals

Journal of Hazardous Materials, 300, 182–188 (Impact factor: 6.025)

Publication year: 2015

A key issue in coping with deliberate or accidental atmospheric releases of hazardous substances is the ability to reliably predict the individual exposure downstream of the source. In many situations, the release time and/or the health-relevant exposure time is short compared to mean concentration time scales. In such a case, a significant scatter of exposure levels is expected due to the stochastic nature of turbulence. The problem becomes even more complex when dispersion occurs over urban environments. The present work is the first attempt to approximate, in generic terms, the statistical behaviour of the abovementioned variability with a beta-distribution probability density function (beta-pdf), which has proved to be quite successful. The important issue of the extreme concentration value in the beta-pdf seems to be properly addressed by the correlation of [5], in which global values of its associated constants are proposed. Two substantially different datasets, the wind tunnel Michelstadt experiment and the field Mock Urban Setting Trial (MUST) experiment, gave clear support to the proposed novel theory and its hypotheses. In addition, the present work can be considered a basis for further investigation and model refinements.

Scientific Journals

Science of the Total Environment, 536, 890–902 (Impact factor: 4.9)

Publication year: 2015

Within the framework of the EPHECT project (Emissions, exposure patterns and health effects of consumer products in the EU), irritative and respiratory health effects were assessed in relation to acute and long-term exposure to key and emerging indoor air pollutants emitted during household use of selected consumer products. In this context, inhalation exposure assessment was carried out for six selected ‘target’ compounds (acrolein, formaldehyde, benzene, naphthalene, d-limonene and α-pinene). This paper presents the methodology and the outcomes from the micro-environmental modelling of the ‘target’ pollutants following single or multiple use of selected consumer products and the subsequent exposure assessment. The results indicate that emissions of benzene and α-pinene from consumer products were not considered to contribute significantly to the EU indoor background levels, in contrast to some cases of formaldehyde and d-limonene emissions in Eastern Europe (mainly from cleaning products). The group of housekeepers in Eastern Europe appears to experience the highest exposures to acrolein, formaldehyde and benzene, followed by the group of retired people in the North, which experiences the highest exposures to naphthalene and α-pinene. High exposure may be attributed to the scenarios developed within this project, which follow a ‘most-representative worst-case scenario’ strategy for exposure and health risk assessment. Despite the above limitations, this is the first comprehensive study that provides exposure estimates for 8 population groups across Europe exposed to 6 priority pollutants as a result of the use of 15 consumer product classes in households, while accounting for regional differences in uses, use scenarios and ventilation conditions of each region.

Scientific Journals

Validation and intercomparison studies, Int. J. Environment and Pollution, 55, 76-85 (Impact factor: 0.515)

Publication year: 2014

In a previous study, new approaches were introduced in CFD-RANS modelling, according to which the concentration time scales are estimated as a function not only of the flow turbulence time scales but also of the pollutant travel times. The new approaches have been implemented for the calculation of the concentration fluctuation dissipation time scale and the maximum individual exposure at short time intervals using the k-ζ model for turbulence parameterisation. The purpose of this study is to implement and validate the new methodology again, this time using the widely known standard k-ε model. The validation is performed using two selected trials of the MUST experiment under neutral conditions. Special emphasis is given to the selection of the constant value of the concentration fluctuation dissipation time scale when the k-ε model is used. Also, an intercomparison of the results between the two turbulence models is performed with a view to identifying model strengths and limitations.

Scientific Journals

Journal of Computational Physics, 231, 6725-6753 (Impact factor: 2.744)

Publication year: 2012

Local grid refinement aims to optimise the relationship between accuracy of the results and number of grid nodes. In the context of the finite volume method no single local refinement criterion has been globally established as optimum for the selection of the control volumes to subdivide, since it is not easy to associate the discretisation error with an easily computable quantity in each control volume. Often the grid refinement criterion is based on an estimate of the truncation error in each control volume, because the truncation error is a natural measure of the discrepancy between the algebraic finite-volume equations and the original differential equations. However, it is not a straightforward task to associate the truncation error with the optimum grid density because of the complexity of the relationship between truncation and discretisation errors. In the present work several criteria based on a truncation error estimate are tested and compared on a regularised lid-driven cavity case at various Reynolds numbers. It is shown that criteria where the truncation error is weighted by the volume of the grid cells perform better than using just the truncation error as the criterion. Also it is observed that the efficiency of local refinement increases with the Reynolds number. The truncation error is estimated by restricting the solution to a coarser grid and applying the coarse grid discrete operator. The complication that high truncation error develops at grid level interfaces is also investigated and several treatments are tested.
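The coarse-grid truncation-error estimate and the volume-weighted criterion can be illustrated on a one-dimensional model problem u″ = f (the paper's setting is a finite-volume flow solver; this sketch only mirrors the estimation procedure):

```python
import math

# Fine grid for u'' = f with the manufactured solution u = sin(pi*x),
# so f = -pi^2 * sin(pi*x). We pretend u is the converged fine-grid solution.
N = 64
h = 1.0 / N
x = [i * h for i in range(N + 1)]
u = [math.sin(math.pi * xi) for xi in x]
f = [-math.pi ** 2 * math.sin(math.pi * xi) for xi in x]

# Restrict the solution to a coarser grid (every second node, spacing 2h)
# and apply the coarse discrete Laplacian; its residual against f estimates
# the truncation error in each coarse cell.
tau = []
for j in range(2, N - 1, 2):  # interior coarse nodes
    lap_2h = (u[j - 2] - 2 * u[j] + u[j + 2]) / (2 * h) ** 2
    tau.append(abs(lap_2h - f[j]))

# Refinement criterion: truncation error weighted by the cell volume
# (here the 1-D cell width 2h), which the study found to outperform
# using the raw truncation error alone.
criterion = [t * 2 * h for t in tau]
worst = max(range(len(criterion)), key=criterion.__getitem__)
print(f"largest indicator at coarse cell {worst}: {criterion[worst]:.2e}")
```

For this smooth solution the indicator peaks where the fourth derivative of u is largest (x = 0.5), which is exactly where refinement should be placed.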