# Bayesian theory and Dynamic Linear Model overview

Bayesian theory is used in demand planning and is applicable to the BATS type of forecast engine.

The Bayesian method was developed by the Reverend Thomas Bayes over 200 years ago. The essence of Bayes is how a belief may be modified in the light of new evidence: both 'belief' and 'evidence' may be based on hard facts, but are just as likely to rest on a more subjective view. The latter concept does not fit easily into the world of science; it does, however, closely resemble the real world, where a question such as 'how many packs of a new flavor of snack can we sell next month?' cannot be answered without subjective input.

The Dynamic Linear Model (DLM) is the mathematical construct used to model a time series and to incorporate and quantify subjective input. The advantage of the DLM is that it is a superset of existing forecasting techniques and is structured to support the Bayesian concept from inception.
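A minimal sketch may help make the construct concrete. The snippet below implements the simplest DLM, a local-level model (random walk plus noise), in which a prior belief about the underlying level is updated into a posterior as each observation arrives. It is illustrative only: the variances `V` and `W` and the starting prior are hypothetical values, not anything taken from the BATS engine itself.

```python
# A minimal local-level DLM sketch (random walk plus noise).
# Observation:  y_t  = mu_t + v_t,      v_t ~ N(0, V)
# Evolution:    mu_t = mu_{t-1} + w_t,  w_t ~ N(0, W)
# V, W and the initial prior below are hypothetical values.

def dlm_update(m, C, y, V=1.0, W=0.5):
    """One Bayesian step: prior belief (mean m, variance C) -> posterior given y."""
    R = C + W            # prior variance after the level evolves
    Q = R + V            # one-step forecast variance
    A = R / Q            # adaptive gain: how much the new evidence counts
    e = y - m            # forecast error (new evidence vs. prior belief)
    m_post = m + A * e   # posterior mean: prior corrected by the error
    C_post = A * V       # posterior variance: uncertainty shrinks
    return m_post, C_post

# A subjective prior belief about the level, updated by three observations.
m, C = 100.0, 25.0
for y in [104.0, 98.0, 107.0]:
    m, C = dlm_update(m, C, y)
```

Each pass through the loop is one application of Bayes' theorem: the prior is combined with the evidence `y` to give the posterior, which becomes the prior for the next period.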

At a basic level, Bayes' theorem describes how the chances of one event happening are altered by the occurrence of another. Given hard evidence, it is easy to see how this may be applied: for example, what is the probability of picking an ace from a pack of cards, given that the 3 of clubs, the ace of hearts, and the 6 of spades have already been drawn?
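The card question can be answered by straightforward counting, conditioning on the cards already removed:

```python
from fractions import Fraction

# Three cards are gone (3 of clubs, ace of hearts, 6 of spades),
# so 49 cards remain, of which 3 are aces.
remaining_cards = 52 - 3   # 49 cards left in the pack
remaining_aces = 4 - 1     # the ace of hearts has been drawn

p_ace = Fraction(remaining_aces, remaining_cards)
print(p_ace)               # 3/49
```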

However, there is another, more powerful way of interpreting Bayes' theorem. A belief in an initial theory, for example, how many packs of the new flavor of snack can be sold, is first influenced by the results from the last time a new flavor was launched. It is subsequently influenced by the early sales figures as the product is rolled out. Bayes' theorem then becomes a recipe for how the original, "prior" belief must be updated in the light of the new evidence.
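This prior-to-posterior recipe can be sketched with a simple Beta-Binomial update, a standard Bayesian device; all the numbers below are hypothetical and chosen only to illustrate the mechanics:

```python
# Prior belief, based on the previous flavor launch: roughly 6 of 10
# stores sold out, encoded as a Beta(6, 4) distribution over the
# sell-out rate. (Hypothetical figures.)
alpha, beta = 6, 4
prior_mean = alpha / (alpha + beta)       # 0.6

# New evidence from the early roll-out: 18 sell-outs in 20 stores.
sellouts, stores = 18, 20
alpha += sellouts
beta += stores - sellouts

# Posterior belief blends the prior with the new evidence.
posterior_mean = alpha / (alpha + beta)   # (6 + 18) / (10 + 20) = 0.8
```

The posterior mean (0.8) sits between the prior belief (0.6) and the raw evidence (0.9), weighted by how much data each contributes, which is exactly the updating behavior the theorem prescribes.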

For example, suppose a researcher is conducting an experiment whose results depend on which one of many possible alternatives prevails. Although not certain which alternative will ultimately prevail, the researcher nevertheless has some information on the basis of which he is willing to make a subjective judgement about the probabilities of the alternatives. Thus, the researcher assigns probabilities to all the alternatives before obtaining the experimental evidence.

Because these probabilities reflect the researcher's judgement before the evidence is observed, they are known as prior probabilities. Once the researcher obtains experimental evidence by collecting a set of data, the conditional probabilities of the alternatives can be computed. These are known as posterior probabilities, in the sense that they are determined after the experimental evidence is obtained.
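The researcher's situation can be sketched for a handful of discrete alternatives. The prior and likelihood values below are hypothetical; the calculation itself is Bayes' theorem, posterior proportional to prior times likelihood:

```python
# Subjective prior probabilities over three alternatives (hypothetical).
priors = {"A": 0.5, "B": 0.3, "C": 0.2}

# Likelihood of the experimental data under each alternative,
# P(data | alternative) -- also hypothetical.
likelihood = {"A": 0.10, "B": 0.40, "C": 0.25}

# Bayes' theorem: P(alt | data) = P(data | alt) * P(alt) / P(data)
unnorm = {k: priors[k] * likelihood[k] for k in priors}
evidence = sum(unnorm.values())
posterior = {k: v / evidence for k, v in unnorm.items()}
```

Here alternative B, despite a smaller prior than A, ends up with the largest posterior probability because the data are far more likely under it: the evidence has revised the researcher's initial judgement.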