
Jimbo

Random Thoughts on Straight Razor Shaving

by Jimbo, 04-21-2012 at 10:01 PM
Many factors influence the various facets of straight razor shaving, and these are typically well, and repeatedly, discussed throughout the forum. Honing, stropping, technique, lathering... these have all had, and will no doubt continue to have, very thorough and close thought applied to them by the many and varied members of SRP. However, there is one important aspect of these discussions that is almost always overlooked (or at least under-thought), and that is the role of stochastic, or random, processes.

It could well be imagined that, for many, the very idea that random behaviour plays any kind of role in shaving is complete anathema. This is, by necessity, a very deterministic hobby, and no one wants to think that the outcome of their daily shave depends on the vagaries of chance. However, when we consider the role of randomness we are not talking about gross outcomes, but rather the variations that occur at an individual level - in the inputs to the shave, as it were. We are talking about the "YMMV" (Your Mileage May Vary) component that is present both between shavers and between shaves.

Take, for example, the circumstance of cutting or nicking oneself. It is a rare event to read someone post "Yes, I deliberately set out to cut my face this morning" (though based on some of the techniques I have seen people use over the years, this would be an honest comment to make!). No, a nick or cut is most definitely not a deterministic event, and therefore both the event itself and the component events that lead up to it must be considered random.

Moreover, we are not robots. We do not have the ability, no matter how hard we try, to exactly replicate shaves: identical angles at identical times at identical places on the face with identical pressures, identical amounts of soap applied to brushes with identical amounts of water in them, etc. The whole idea is ridiculous. The same goes for honing, stropping, prep, and so on. We vary. We vary from each other, and we vary within ourselves. Some of this variation can be explained in deterministic ways - these are often the topics of discussion in the forums: technique, blade type, lather, prep, etc. However, this variation both within and between shavers also has a distinct and non-trivial component of randomness: things we cannot quantify directly. Chance dogs our every shaving stroke.

Over the coming weeks and months I intend to discuss and (perhaps optimistically) explain in simple terms some of the common stochastic processes, probability distributions, and statistical techniques that might be applied to shaving. A lot of it will be theoretical in nature: I do not mean hardcore maths (though sometimes there might be maths); rather, I mean that I'll be discussing models. And models, as we all know, are just stylised versions of the real thing and are thus more often wrong than useful. However, it will at least keep me off the streets.

I'll leave you with a theorem and a thought. The theorem is known as Bayes' Theorem. Thomas Bayes, English cleric and part-time mathematician (or vice-versa), lived and worked in the 1700s. Interestingly, Bayes never published his work, and the first formal statement of the theorem to appear in the literature was published posthumously. In fact, the French mathematician Laplace claimed to have rediscovered the theorem independently in the 18th century, thus continuing the well-worn path of European mathematicians claiming the work of their English betters as their own - see Leibniz and Newton's calculus for an example that typifies the behaviour... But I digress...

Bayes' theorem, at its core, is about conditional probability and how to swap it around. For example, suppose you have a drug test taken on an Italian cyclist and it has come up positive. The manufacturers of the drug test know certain things about the test through the copious trials they must undertake to have it sanctioned by the governing body of cycling. Among the most important of these are the false positive rate (the probability that the test is positive, given the drug was not taken) and the false negative rate (the probability that the test is negative, given the drug was taken). These two rates are known as conditional probabilities: conditional on knowing one thing (e.g. that the drug was not taken), what is the probability of another thing happening (e.g. that the test shows positive)? There are of course other conditional probabilities associated with these tests: the true positive and true negative rates, for example.

In any event, the cyclist's lawyer is only interested in one thing: given the test is positive, what is the probability that the cyclist actually took the drug? If this probability is small enough, reasonable doubt can be applied and his client can go on to cheat his way to the top of another Tour de France in the future. If we denote D = taken drug and P = positive drug test, we can write the probability the lawyer is interested in as P(D | P). However, what we actually know is P(P | D), the true positive rate. Bayes' theorem lets us reverse this conditioning via:

P(A|B) = [P(B|A)P(A)]/P(B)

so long as P(B) is not equal to zero (since dividing by zero is bad, M'kay?).
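
To make the mechanics concrete, here is a quick numeric sketch in Python. All three rates below are invented purely for illustration; the point is only how the reversal works.

    # Invented rates for the drug-test example; any real test will differ.
    prior_d = 0.02     # P(D): assumed base rate of doping among tested cyclists
    true_pos = 0.99    # P(P | D): true positive rate (sensitivity)
    false_pos = 0.05   # P(P | not D): false positive rate

    # Law of total probability gives the denominator, P(P):
    p_pos = true_pos * prior_d + false_pos * (1 - prior_d)

    # Bayes' theorem: P(D | P) = P(P | D) P(D) / P(P)
    p_d_given_pos = true_pos * prior_d / p_pos
    print(f"P(D | positive test) = {p_d_given_pos:.3f}")  # about 0.288

Note how low that answer is despite the impressive-looking 99% true positive rate: when doping is rare, the false positives swamp the true ones. That is precisely the kind of argument the lawyer is hoping to make.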

You will sometimes see Bayes' theorem written in its "proportional" form, without the P(B) in the denominator and with the equals sign replaced by the "proportional to" sign (which looks like an infinity symbol open at the right-hand side):

P(A|B) \propto P(B|A) P(A)
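
In the proportional form you simply drop the denominator, compute an unnormalized weight for each competing hypothesis, and then rescale the weights so they sum to one. Continuing with the invented drug-test numbers from the sketch above:

    # Unnormalized posterior weights: P(data | hypothesis) * P(hypothesis)
    unnorm = {
        "doped": 0.99 * 0.02,   # P(P | D) P(D)
        "clean": 0.05 * 0.98,   # P(P | not D) P(not D)
    }

    # Normalizing recovers the dropped denominator, P(P)
    total = sum(unnorm.values())
    posterior = {h: w / total for h, w in unnorm.items()}
    print(posterior)  # doped: ~0.288, clean: ~0.712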

The reason I have gone into (way, way too much) detail about this is that we can also use Bayes' theorem, particularly in the second (proportional) form, as a probabilistic or stochastic model for the way humans learn through experience. (Some people disagree with this statement, but they are wrong, as this is my blog. They can get their own blog and spout their misinformation on their own time.)

I will cover this idea in more detail in the next blog post, but for now suffice it to say that P(A) represents uncertainty about something interesting (A) prior to obtaining data (B) about it. P(A|B) is what we know (or our updated uncertainty) about A after seeing some data. P(B|A) quantifies how much information the data contains regarding A. So, for example:

P(Brush Loading | 50 shaves) \propto P(50 shaves | Brush Loading) P(Brush Loading)

Or, in words: what we know about loading our brush with soap after 50 shaves (P(BL|50)) is a combination of what we knew about brush loading before those 50 shaves (P(BL)) and how much information those 50 shaves gave us about brush loading (P(50|BL)).
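
Of course "brush loading" is not really a single number, but as a toy version suppose we boil it down to one unknown, theta: the chance that the way we load the brush produces a good lather on any given shave. If we (generously) assume each of the 50 shaves gives an independent good/poor verdict, then putting a Beta prior on theta makes the update simple arithmetic. A minimal sketch in Python, with the prior and the data invented purely for illustration:

    # Toy Bayesian update for the brush-loading example (all numbers invented).
    prior_a, prior_b = 2, 2   # Beta(2, 2): vague prior beliefs about theta

    good, total = 38, 50      # suppose 38 of our 50 shaves produced a good lather

    # Beta prior + binomial data => Beta posterior (conjugacy does the work)
    post_a = prior_a + good
    post_b = prior_b + (total - good)

    post_mean = post_a / (post_a + post_b)
    print(f"Posterior mean of theta: {post_mean:.3f}")  # about 0.741

The prior beliefs and the data combine in exactly the proportional form above: the 50 shaves pull our vague Beta(2, 2) towards whatever the lathers actually told us.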

Seems simple and intuitive, and it is. The best mathematical theorems always are.

Until next time,

James.

Comments

  1. Link8382:
    Ok, now I am intrigued! I will be watching for updates.
  2. Speedster:
    Takes some kinda bastard to sully the fun sport of wet shaving with mathematics and probability. Well done.
  3. Slartibartfast:
    TL;DR

    .........
  4. 32t:
    " how much information those 50 shaves gave us about brush loading (P(50|B L))."

    Interesting. How are you going to quantify this, taking into consideration the SBLA [the Shaver's Brain Learning Ability]?

    Tim
  5. Jimbo:
    Quote Originally Posted by 32t
    " how much information those 50 shaves gave us about brush loading (P(50|B L))."

    Interesting. How are you going to quantify this, taking into consideration the SBLA [the Shaver's Brain Learning Ability]?

    Tim
    Models can be postulated for the "likelihood" function that contain deterministic components as well as the distributional assumptions typically made for "measurement with error" experiments. Of course, the amount of information contained in data depends on how the data were obtained, and for this part of the "learning model" we assume that any two "rational" individuals faced with the same data will extract the same information from it.

    Where we allow individuals to have different learning abilities is actually in the P(BL) part. Every individual will differ in this regard, and you'd be surprised how much flexibility simply allowing someone's prior beliefs to vary can induce.
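
    As a rough sketch of that last point (priors and data invented, reusing the toy Beta-Binomial setup from the main post): two shavers see exactly the same 50 shaves, but start from different priors, and so end up with different posteriors.

        # Same data, different priors => different posteriors (invented numbers).
        good, bad = 38, 12  # the same 50 shaves, observed by both shavers

        priors = {
            "open-minded novice": (1, 1),   # flat Beta(1, 1): no strong opinion
            "sceptical veteran": (5, 20),   # strong prior belief that theta is low
        }

        for name, (a, b) in priors.items():
            post_mean = (a + good) / (a + good + b + bad)
            print(f"{name}: posterior mean = {post_mean:.3f}")
        # open-minded novice: 0.750
        # sceptical veteran: 0.573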

    The next blog post will cover this stuff in more detail.

    James.
  6. hoglahoo:
    Is there some formula that can give us an idea when to expect the next update on these random thoughts?