Spectrograph

Last night at the EDM Council meeting, Dave Newman from Wells Fargo used spectroscopy as an analogy for the process of decomposing business concepts into their constituent parts. The more I’ve thought about it, the more I like it.

Last week I was staring at the concept “unemployment rate” as part of a client project. Under normal light (that is, using traditional modeling approaches) we’d see “unemployment rate” as a number, probably attach it to the thing the rate was measuring, say the US economy, and be done with it. But when we shine the semantic spectrometer at it, the constituent parts start to light up. It is a measurement. Stare at those little lines a bit harder: the measurement has a value (because the term “value” is overloaded, in gist we’d say it has a magnitude), and the magnitude is a percentage. Percentages are expressed as ratios, and this one (stare a little harder at the spectrograph) is the ratio of two populations, in this case groups of humans. One population consists of the people who are not currently employed and who have been actively seeking employment over the past week; the other is the first group plus those who are currently employed.

These two populations are abstract concepts until someone decides to measure unemployment. At that point, the measurement process has us establish an intensional group (say, residents of Autauga County, Alabama) and perform some process (maybe a phone survey) on a sample (a sub-population) of those residents. Each contacted resident is categorized into one of three sub-sub-populations: currently working; currently not working and actively seeking work; and not working and not actively seeking work. (Note: there is a fourth group that logically follows from this decomposition. It is of no interest to the Bureau of Labor Statistics, but it is of interest to recruiters: working and actively seeking employment.) Finally, the measurement process dictates whether the measure is a point in time or an average of several measures made over time.
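To make the decomposition concrete, here is a minimal Python sketch of the structure just described. Every name in it (Person, Population, unemployment_rate, and so on) is a hypothetical illustration; gist itself is an OWL ontology, not a Python library, so this only mirrors the shape of the model: two intensionally defined populations, and a measurement whose magnitude is the ratio of one to the other.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Person:
    """One contacted resident, categorized by the two taxonomic
    distinctions the definition needed: working and seeking work."""
    employed: bool
    seeking_work: bool

@dataclass(frozen=True)
class Population:
    """A group of people picked out intensionally, by a membership
    criterion, rather than by enumerating its members."""
    name: str
    criterion: Callable[[Person], bool]

# The two populations the ratio is built from.
unemployed_seeking = Population(
    "not employed and actively seeking work",
    lambda p: not p.employed and p.seeking_work,
)
labor_force = Population(
    "the first group plus the currently employed",
    lambda p: p.employed or (not p.employed and p.seeking_work),
)

def unemployment_rate(sample: list[Person]) -> float:
    """The measurement's magnitude: a percentage, i.e. the ratio of
    the two populations as observed in a surveyed sample."""
    numerator = sum(1 for p in sample if unemployed_seeking.criterion(p))
    denominator = sum(1 for p in sample if labor_force.criterion(p))
    return 100.0 * numerator / denominator

# A toy phone-survey sample (values invented purely for illustration).
sample = [
    Person(employed=True, seeking_work=False),   # working
    Person(employed=True, seeking_work=True),    # working and seeking: the fourth group
    Person(employed=False, seeking_work=True),   # unemployed, seeking
    Person(employed=False, seeking_work=False),  # in neither population
]
print(f"{unemployment_rate(sample):.1f}%")  # -> 33.3%
```

Note how the fourth group, the one of interest to recruiters, falls out of the same two distinctions at no extra modeling cost.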
This seems like a lot of work for what started as just a simple number. But look at what we’ve done: we have a completely non-subjective definition of what the concept means. We have a first-class concept that we can associate with many different reference points; for example, the same concept can be applied to national, state, or local unemployment. An ontology will organize this concept in close proximity to other closely related concepts. And the constituent parts of the model (the populations, for instance) are now fully reusable concepts as well. Also of interest: the entire definition was built out of reusable parts (Magnitude, Measurement, Population, Measurement Process, Residence, and Geographic Area) that existed in gist prior to this examination. The only things that needed to be postulated to complete the definition were what would currently be two taxonomic distinctions: working and seeking work.

David, thanks for that analogy.
