Measurable Results.

Doneda and Almeida suggest that the time may have come to apply governance to algorithms because of the growing risks of intentional or unintentional "… manipulation, biases, censorship, social discrimination, violations of privacy and property rights and more" [1] through the dynamic application of a relatively static algorithm to a relatively dynamic data set. By way of example, we have probably all experienced the unintended consequences of applying a reasonably well-understood algorithm to new data. We all have a basic grasp of what the Google search algorithm will do for us, but some of you may have experienced embarrassment like mine when I typed in a perfectly innocent search term without thinking through the possible alternative meanings of that set of words (no, I'm not going to share). At the other end of the spectrum from relatively harmless misunderstandings, there is a risk that algorithms can be intentionally manipulative: the VW emission control algorithm that directed different behavior when it detected a test environment is a good example. For those of us who deal with outsourced software development, it is impossible to test every delivered algorithm against every possible set of data and then validate the outcomes.

If we consider software value from a governance perspective, it should be desirable to understand how many algorithms we own and what they are worth. Clearly, the Google search algorithm is worth more than my company. But are there any algorithms in your company's software that represent trade secrets, or even simple competitive differentiators? Which are the most valuable? How could their value be improved? Are they software assets that should be inventoried and managed? Are they software assets that could be sold or licensed? If data can be gathered and sold, then why not algorithms?

From a software metrics perspective, it should be easy to identify and count the algorithms in a piece of software.
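To make the inventory idea concrete, here is a purely illustrative sketch in Python. The record fields, the weights, and the scoring formula are my own assumptions, not anything proposed by Doneda and Almeida; the only point it demonstrates is that usage frequency and data exposure could feed a crude, parametric value-and-risk score for each algorithm in an inventory.

```python
from dataclasses import dataclass

@dataclass
class AlgorithmRecord:
    """One entry in a hypothetical algorithm inventory (fields are assumptions)."""
    name: str
    usage_count: int        # how many transactions invoke this algorithm
    data_volume_gb: float   # rough size of the data it is exposed to
    trade_secret: bool      # does it embody a competitive differentiator?

def crude_score(rec: AlgorithmRecord,
                w_usage: float = 1.0,
                w_data: float = 0.5,
                w_secret: float = 10.0) -> float:
    """A toy parametric score: more usage and more data exposure raise the
    score, and the same number serves as a proxy for both value and risk,
    since the two appear to scale together."""
    return (w_usage * rec.usage_count
            + w_data * rec.data_volume_gb
            + (w_secret if rec.trade_secret else 0.0))

def rank_inventory(inventory: list[AlgorithmRecord]) -> list[AlgorithmRecord]:
    """Sort the inventory so the highest-scoring algorithms come first."""
    return sorted(inventory, key=crude_score, reverse=True)

# Example with made-up numbers:
ranker = AlgorithmRecord("search_ranker", usage_count=5000,
                         data_volume_gb=200.0, trade_secret=True)
crude_score(ranker)  # 1.0*5000 + 0.5*200.0 + 10.0 = 5110.0
```

A single score that rises with both usage and data exposure matches the observation, made below, that value and risk scale together; a real model would need to calibrate the weights against actual business outcomes.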
Indeed, function point analysis might be a starting point, using its rules for counting unique transactions, each of which presumably involves one or more algorithms, though it would be necessary to identify those algorithms that are used by many unique transactions (perhaps as a measure of the value of the algorithm?).

Another possible perspective on the value of an algorithm might be the nature of the data it processes. Again, function points might offer a starting point here, but Doneda and Almeida offer a slightly different perspective. They mention three characteristics of the data that feeds "Big Data" algorithms, "… the 3 V's: volume (more data are available), variety (from a wider number of sources), and velocity (at an increasing pace, even in real time)." It seems to me that these characteristics could be used to form a parametric estimate of the risk and value associated with each algorithm.

It is interesting to me that these potential software metrics appear to scale similarly for software value and software risk. That is, algorithms that are used more often are more valuable yet carry with them more risk. The same applies to algorithms that are potentially exposed to more data.

[1] Doneda, Danilo & Almeida, Virgilio A.F. "What Is Algorithm Governance?" IEEE Computer Edge, December 2016.

Mike Harris, CEO