Summary and Info
Most successful applications of modern science and engineering, from deciphering the human genome to predicting the weather to controlling space missions, involve processing large amounts of data and large knowledge bases. The ability of computers to perform fast data and knowledge processing rests on hardware support for very fast elementary operations, such as arithmetic operations on (exactly known) numbers and logical operations on binary ("true"-"false") values.

In practice, measurements are never 100% accurate. It is therefore necessary to find out how this input inaccuracy (uncertainty) affects the results of data processing. Sometimes, we know the corresponding probability distribution; sometimes, we only know an upper bound on the measurement error -- which leads to interval bounds on the (unknown) actual value. Also, experts are usually not 100% certain about the statements included in knowledge bases. A natural way to describe this uncertainty is to use non-classical logics (probabilistic, fuzzy, etc.).

This book contains the proceedings of the first international workshop that brought together researchers working on interval and probabilistic uncertainty and on non-classical logics. We hope that this workshop will lead to a boost in the much-needed collaboration between the uncertainty analysis and non-classical logic communities, and thus to better processing of uncertainty.
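To make the interval idea concrete: if all we know is an upper bound on the measurement error, the actual value is only known to lie in an interval, and these intervals can be propagated through a computation. The following is a minimal sketch (not taken from the book; the example quantities are invented for illustration):

```python
# Interval arithmetic sketch: a measurement m with error bound d only tells us
# that the actual value lies in [m - d, m + d]. Propagating such intervals
# through arithmetic yields guaranteed bounds on the result.

def interval_add(a, b):
    # [a_lo, a_hi] + [b_lo, b_hi] = [a_lo + b_lo, a_hi + b_hi]
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    # The product's bounds are the min/max over all four endpoint products
    # (needed because endpoints may be negative).
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

# Hypothetical example: a voltage measured as 5.0 with error bound 0.1,
# and a current measured as 2.0 with error bound 0.05.
v = (5.0 - 0.1, 5.0 + 0.1)
i = (2.0 - 0.05, 2.0 + 0.05)
power = interval_mul(v, i)  # guaranteed bounds on the (unknown) actual power
```

Here the resulting interval is guaranteed to contain the actual power, no matter where the true voltage and current fall within their error bounds.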