Uncertainty quantification is the science of the quantitative characterization and reduction of uncertainties in both computational and real-world applications. It determines how likely certain outcomes are when some aspects of the system are not exactly known. Every system is influenced by uncertainties and, using a proactive approach, the most influential uncertainties can be overcome with slight modifications of the initial design.

Our program is a non-intrusive uncertainty propagation approach based on the High Dimensional Model Representation (HDMR), in which each sub-function is treated independently. Each sub-function is called an "increment function" and, as its name suggests, it adds a small increment to the nominal value. The nominal sample (or nominal value) can be understood as the initial design around which the uncertainties are observed. In its mathematical formulation, the HDMR reads:

*F(x) = F( ^{c}x_{1}, …, ^{c}x_{n}) + Σ_{i} dF_{i}(x_{i}) + Σ_{i<j} dF_{ij}(x_{i}, x_{j}) + … + dF_{1…n}(x_{1}, …, x_{n})*

where *F(x)* represents the function of interest, the superscript *^{c}* in front of a variable *x_{i}* denotes the nominal position of that variable, and *dF_{i}* represents the increment function, which is originally defined in terms of integrals and derivatives of *F(x)*.

However, working with integrals and derivatives is not convenient; therefore, the increment function is transformed into an analytic form, which for the first order reads:

*dF_{i}(x_{i}) = F( ^{c}x_{1}, …, x_{i}, …, ^{c}x_{n}) − F( ^{c}x_{1}, …, ^{c}x_{n})*
for the second order, it reads:

*dF_{ij}(x_{i}, x_{j}) = F( ^{c}x_{1}, …, x_{i}, …, x_{j}, …, ^{c}x_{n}) − dF_{i}(x_{i}) − dF_{j}(x_{j}) − F( ^{c}x_{1}, …, ^{c}x_{n})*
and the higher order increment functions are defined accordingly.
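As a hypothetical sketch (the test function, nominal values, and helper names below are illustrative assumptions, not part of the software), the analytic increment functions can be written out for a 2-D case, where the expansion is exact:

```python
# Illustrative cut-HDMR increment functions for a 2-D test problem.
# All names and values here are assumptions for demonstration only.
import math

def F(x1, x2):
    # Simple 2-D test function of interest.
    return math.sin(x1) + x1 * x2 + x2 ** 2

# Nominal sample (^c x), e.g. the mean of the input distributions.
cx1, cx2 = 0.5, 1.0
F0 = F(cx1, cx2)                      # nominal solution F(^c x1, ^c x2)

def dF1(x1):
    # First-order increment: vary x1, hold x2 at its nominal value.
    return F(x1, cx2) - F0

def dF2(x2):
    return F(cx1, x2) - F0

def dF12(x1, x2):
    # Second-order increment: what the first-order terms cannot explain.
    return F(x1, x2) - dF1(x1) - dF2(x2) - F0

# For a 2-D problem the expansion is exact: F = F0 + dF1 + dF2 + dF12.
x1, x2 = 1.2, -0.3
recon = F0 + dF1(x1) + dF2(x2) + dF12(x1, x2)
assert abs(recon - F(x1, x2)) < 1e-12
```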

The order of the increment function is given by the number of its functional variables, as shown above. The function of interest is fully defined by all increment functions; however, computing all of them represents an extreme computational burden, e.g. for a 3-D problem it would take 125 samples of the expensive solver using only 5 samples per abscissa. Therefore, only the important increment functions are selected using our selection scheme, which dramatically reduces the number of samples. The neglected increment functions are considered to be zero, i.e. they have a null influence on the final uncertainty. The selection scheme is part of the internal know-how of the UptimAI company.
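The growth of the full-tensor cost can be checked with a line of arithmetic (illustrative only; the actual number of samples the selection scheme needs is problem-dependent):

```python
# Cost of evaluating all increment functions on a full tensor grid:
# with a fixed number of samples per abscissa, the expensive-solver
# sample count grows exponentially with the dimension d.
samples_per_abscissa = 5
for d in (1, 2, 3, 6):
    # d = 3 reproduces the 125 samples cited in the text above.
    print(d, samples_per_abscissa ** d)
```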

Each increment function is interpolated/approximated with an independent model, and the sum of these models creates the final interpolation/approximation model. The currently implemented surrogate models are:

- **Lagrange interpolation:** A well-known technique for simple functions.
- **ISI:** ISI stands for Independent Surrogate Interpolation; it is used for complex functions involving highly oscillatory and discontinuous regions. The ISI interpolation technique is part of the internal know-how of the UptimAI company.
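As a generic illustration of the first option, a 1-D increment function can be interpolated with a plain Lagrange polynomial (the solver slice and sample values below are assumptions; the ISI model and the automatic selection remain proprietary):

```python
# Generic sketch: Lagrange interpolation of a 1-D increment function.
# The "solver" F below is a hypothetical stand-in, not the real code.
import math

cx = 0.0                                    # nominal value of the variable
F = lambda x: math.exp(-x) * math.cos(x)    # hypothetical 1-D slice of the solver
dF = lambda x: F(x) - F(cx)                 # first-order increment function

def lagrange_eval(nodes, values, x):
    """Evaluate the Lagrange interpolating polynomial through (nodes, values) at x."""
    total = 0.0
    for i, xi in enumerate(nodes):
        basis = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += values[i] * basis
    return total

# 5 samples per abscissa, matching the cost example above.
nodes = [-1.0, -0.5, 0.0, 0.5, 1.0]
values = [dF(x) for x in nodes]

# For a smooth increment function the interpolation error stays small.
err = abs(lagrange_eval(nodes, values, 0.37) - dF(0.37))
assert err < 1e-2
```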

The selection process for each model is fully automatic, i.e. the algorithm selects the best model for a given problem. This ensures that the process is not only efficient but also very robust, and that the final model does not diverge. The selection process is part of the internal know-how of the UptimAI company.

This final surrogate model is used to propagate Monte Carlo (MC) samples, establishing the final statistics for the selected problem. However, partial statistics can be obtained by applying MC directly to the interpolation model of a single increment function. This makes it possible to establish the statistical influence of each increment function and gain further insight into the problem. The application and use of each increment function are described in each section under UQ.
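The two uses of MC sampling can be sketched as follows (a minimal illustration with made-up increment models standing in for the real surrogates; none of the names or coefficients come from the software):

```python
# Hedged sketch of MC propagation through an HDMR-style surrogate.
# The increment models dF1, dF2 are invented stand-ins for illustration.
import random
import statistics

random.seed(0)

# Nominal values (here the means of the input distributions).
cx1, cx2 = 0.0, 0.0
F0 = 1.0                                   # assumed nominal solution F(^c x1, ^c x2)

# Cheap surrogate models of the selected increment functions (illustrative).
dF1 = lambda x1: 0.8 * (x1 - cx1)
dF2 = lambda x2: 0.3 * (x2 - cx2) ** 2
surrogate = lambda x1, x2: F0 + dF1(x1) + dF2(x2)   # neglected dF12 treated as zero

# Final statistics: propagate MC samples of the inputs through the full surrogate.
samples = [(random.gauss(cx1, 1.0), random.gauss(cx2, 1.0)) for _ in range(100_000)]
outputs = [surrogate(x1, x2) for x1, x2 in samples]
print("mean:", statistics.fmean(outputs), "std:", statistics.stdev(outputs))

# Partial statistics: apply MC directly to one increment model at a time
# to see how much output variance each increment function contributes.
var1 = statistics.variance([dF1(x1) for x1, _ in samples])
var2 = statistics.variance([dF2(x2) for _, x2 in samples])
print("variance from dF1:", var1, "from dF2:", var2)
```

Because the surrogate is cheap to evaluate, the MC sample count can be large without re-running the expensive solver.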

A very important role is played by the nominal solution, *F( ^{c}x_{1}, …, ^{c}x_{n})*, which represents the deterministic solution for the given nominal values of the input distributions. The nominal sample (or solution) is defined in the preprocessing phase and, by default, it is positioned at the mean value of the defined distribution. The nominal solution can be understood as the result that would be obtained if the statistical approach had not been taken.

NOTE: The nominal solution can be selected arbitrarily. However, to make the results easier to interpret, it is suggested to use the mean value of the input distributions as the nominal sample.

## How to use it

Fig. 1 shows the initial state of the application's window, in which three main sections can be found. On the left side of the window there is the *Method selection* frame, used to select the methods of the program and their tools. For certain methods, additional information about variables, increment functions, etc. may appear in the *Info frame* at the bottom left. The right side of the window is dedicated to the *Main working frame*, where the functions of these tools are executed. Through the *Menu bar* in the top left corner, one can call general functions of the program which are common to all methods.

*Fig. 1: Result Postprocess – Main window*

First, it is necessary to load a data file created with the core solver. Under *File* in the *Menu bar*, select *Load*. The software automatically navigates to the folder of the selected project, where all the data files are usually stored. Select the project's ***.json** file, which holds all the required data; the software processes all the work. Once the software has created all the data necessary for fast use of the post-processing tool, select a method, e.g. UQ, and then the required analysis, e.g. Sensitivity analysis -> increment.

Each analysis tool has its own help, which can be accessed from *About -> Help*. The algorithm automatically recognizes the selected tool and displays the appropriate help.

If another file needs to be processed, first use *Clean* under *File* to clear the loaded data, then load the new ***.json** file.

The methods button (in green colour) closes all analyses and navigates back to the main selection. However, it does not erase the selected file from memory.

Under *About* -> *Company* there is a short description of our company together with contact information. If you have any questions, please do not hesitate to contact us.