Dreaming is Nursed in DARQness, Part 3: In Search of the Master Algorithm

In my last entry, I wrote about Machine Learning, part of the A (for Artificial Intelligence) in DARQ, and I’d like to continue on this topic a bit more. My old friend from high school, Kelly White, also said he didn’t like the term “Artificial Intelligence” and instead simply used Machine Learning. He’s in good company: The Economist’s recent Technology Quarterly entitled “The Limits of AI” was really describing the limits of Machine Learning. Clearly stated, Machine Learning is a subfield of a greater field referred to as Artificial Intelligence. In other words, “Artificial Intelligence” is akin to the word mammal, and Machine Learning is a particular kind of mammal, say a horse or a dog. With that understanding, I’d like to discuss the broader field of Artificial Intelligence in general, which relates to the particular project we’ll be undertaking through the AIAX Initiative with Eric Jensen of Causality Link. To do this, I’m going to have to take a bit of a tangent first; namely, I’d like to discuss the search for the “master algorithm.”

The Master Algorithm

It is hard to say when the concept of the master algorithm was first introduced; some say it goes back as far as the origin of civilization. At the very least, it started in the West with Pythagoras in southern Italy and progressed over time through many variants. All that said, the Modern Age’s first serious candidate was the calculus ratiocinator of Gottfried Wilhelm von Leibniz. In that work, Leibniz envisioned the recording of thought itself through a universal symbolic logic “where truth could be reduced to a kind of calculus.” Leibniz wasn’t just a thinker; he was a builder who constructed a “stepped reckoner” that executed a multiple-digit addition cycle with each revolution of a stepped cylindrical gear. It was the supercomputer of its time and advanced the state of the art in accounting and finance. As Leibniz put it, it was unworthy of true thinkers to spend hours in the labor of calculation that could easily be handled by a machine. To be clear, this was the first definitive step toward the master algorithm in modern times, because Leibniz not only had the idea of a universal master algorithm, he also produced a machine, a working variant of his system, and even proposed a language that one could use to program the machine.

In more recent times, the idea of a master algorithm has been broken up into different camps of Artificial Intelligence. This has been best summarized by Pedro Domingos of the University of Washington in his book The Master Algorithm. In short, Domingos states that there are several variants of Artificial Intelligence, each with its own “master algorithm,” and that it will be the merger of those algorithms that produces the true Master Algorithm. One of my favorite TV shows of recent years was HBO’s Silicon Valley, and it was in the culmination of that series that this merger of AIs occurs, producing a Master Algorithm called “Son of Anton.” Afraid that their creation will take over the world (like Skynet), the protagonists shut it down and live out their lives in relative obscurity. I take a more benevolent view of AI, so let me expand on how this merger of AIs is likely to occur, but to do so I first have to explain a term, ratiocination, which you already saw in Leibniz’s calculus ratiocinator.

Two of HBO’s Silicon Valley protagonists, who combined AIs in “a peanut butter and chocolate” moment of innovation that creates a super AI called “Son of Anton.”

Ratiocination and the Merger of AIs

Calculus ratiocinator, Leibniz’s universal calculation framework, carries in its very name the word ratiocination, a term derived from Latin that the ancients used to describe a form of reasoning with numbers that could complete and bring together other forms of reasoning, specifically inductive and deductive reasoning. This type of reasoning was explained by Edgar Allan Poe in his “Tales of Ratiocination,” in which he described how his hero and the model for Sherlock Holmes, C. Auguste Dupin, solved mysteries. The idea of ratiocination was further clarified at about the same time by one of the fathers of economics, John Stuart Mill, in his great work A System of Logic, Ratiocinative and Inductive. So what was this process that some of the greatest minds of math, literature, and economics discussed?

In short, ratiocination is the process by which one reasons within a predetermined logical framework and then updates it with empirical feedback. In modern terms, this means merging the AI algorithms of logic with those of statistics. In current parlance, the logical camp are the “symbolists,” who rely on algorithms of “inverse deduction,” which is a fancy way of saying that they understand the premises of a particular field and can therefore “fill in the blanks.” In contrast, the statisticians look at the world and try to learn about it by doing statistical analysis, constantly updating their priors via probabilistic inference, which is again a fancy way of saying that they update with information about what is actually going on. To be clear, on the one side we assume that we understand the general principles, and on the other we assume we don’t and learn from the world around us. If you can unite these two camps, or as Domingos says, “tribes,” then you have a learning machine.
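To make that concrete, here is a toy Python sketch of the two sides working together. It is my own illustration, not Domingos’s formulation: the rules, the probabilities, and the little financial example are all invented purely to show the shape of the idea.

```python
# Toy sketch: a "symbolist" rule base proposes conclusions, then a
# "statistician" updates belief in those conclusions from evidence
# via Bayes' rule. All names and numbers are illustrative only.

RULES = {
    # premise -> conclusion, in the spirit of "filling in the blanks"
    "rates_rise": "bank_margins_improve",
    "bank_margins_improve": "bank_earnings_beat",
}

def deduce(facts):
    """Forward-chain over the rule base until no new conclusions appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in RULES.items():
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def bayes_update(prior, likelihood, false_alarm):
    """Update P(hypothesis) after one piece of supporting evidence."""
    numerator = likelihood * prior
    return numerator / (numerator + false_alarm * (1.0 - prior))

# The symbolic side proposes hypotheses from known premises...
hypotheses = deduce({"rates_rise"})
print(hypotheses)  # includes 'bank_earnings_beat', derived from the premise

# ...and the statistical side revises our confidence as evidence arrives.
belief = 0.5  # agnostic prior
for _ in range(3):  # three independent supportive news items
    belief = bayes_update(belief, likelihood=0.8, false_alarm=0.3)
print(round(belief, 3))  # belief climbs well above the 0.5 prior
```

The symbolist half supplies structure we already believe in; the statistician half tells us how seriously to take it once the data starts talking.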

Uniting the Tribes

This idea of “uniting the tribes” occurred long before anyone had defined tribes of Artificial Intelligence. Going back to Leibniz again, he was very clear that in order to create a functioning system that could learn, one would need a language to facilitate it. He called this miraculous language the characteristica universalis, and such a language would be able to express mathematical, scientific, and even metaphysical concepts. For those of us who were around to hear luminaries like Philippe Kahn or Steve Jobs talk about it, that universal language is very likely an object-oriented language. The promises that Kahn and Jobs laid out back in the day were great, and much progress was made in finance when Goldman Sachs created its own object-oriented language called SLang (Securities Language) and an associated object store called SecDB (Securities DataBase). The idea of object-oriented construction was impressed on my mind at BYU in one of the world’s few academic NeXT computer labs, and then at Columbia University, when I was given a stack of correspondence between some professors and Fischer Black of Goldman Sachs on an object-oriented architecture that Black was proposing to address the problems of finance. It should be noted that Black, while famous in finance circles, was first a computer scientist, having done his doctoral work on the classification of information into what were essentially objects (related to another AI algorithm, the one we call “nearest neighbor”).

So do we unite the tribes via one universal object-oriented language and data store, as Goldman Sachs did, or in some other way? I think it’s clear that we have to choose “some other way” right now. I say it is clear because, while the Goldman effort was successful for a time, it has proven less than enduring in the market due to its inability to adapt to the changes occurring around it. Specifically, the Goldman language and tools could not adapt to necessary changes in computer architecture, such as the rise of multi-threaded processing, to say nothing of more recent changes such as the rise of GPUs and other specialized processor variants such as FPGAs or IPUs (see Graphcore to get an idea of this kind of chip). I think the tribes do need to be united, but in order to unite them we must first develop a framework to “unite the languages.” Let me explain.

Uniting the Languages

I personally think that Python has evolved into a wonderful language for rapid prototyping and exploration. It is structured just enough without being too restrictive, and it lets people express a ton of creativity right out of the gate without a ton of specialized training (kind of like VisiCalc did in its day). I’ve seen it in practice: my students can pick it up quickly and gain a great deal of confidence almost immediately. Much of that is thanks to so many good libraries and tools such as Anaconda (thanks to former BYU professor Travis Oliphant and a handful of others for this). That said, I believe only a very inexperienced person would build a large-scale, mission-critical application on Python alone. I know there have been a few “billion-dollar” deployments built around Python, such as Bank of America’s Quartz and JP Morgan’s Athena, but a couple of points should be noted here. First, the dollar amount spent does not indicate success; it often indicates just the opposite (I remember coming into Barclays Global Investors after a failed $200 million attempt to deploy Ada). Second, Python was not doing the heavy lifting; C++ and Java were. That brings me to my second set of languages.

I think that C++ and C# have evolved into languages that capture many of the object-oriented ideals espoused by Kahn, Jobs, and Black. Those same ideals are not in “the DNA” of Python, where many of the concepts are simply “bolted on.” In addition, I believe that lesser-known object-oriented languages such as Ada and Eiffel offer some very interesting ideas that can be used in large-scale, stable systems, such as Design by Contract. Keep in mind that these ideas are more than a little related to the concept people are starting to play around with today on Blockchain: Smart Contracts. So where does that leave us?
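Eiffel bakes contracts directly into the language. As a rough illustration only, and in Python rather than Eiffel or Ada syntax, my own sketch of the idea looks something like this:

```python
# A rough Python approximation of Design by Contract: a precondition and a
# postcondition checked around a function, in the spirit of Eiffel's
# require/ensure clauses. Purely illustrative, not a production pattern.
import functools

def contract(pre=None, post=None):
    """Wrap a function with a precondition on its arguments and a
    postcondition on its result."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition failed for {func.__name__}"
            result = func(*args, **kwargs)
            if post is not None:
                assert post(result), f"postcondition failed for {func.__name__}"
            return result
        return wrapper
    return decorator

@contract(pre=lambda notional, rate: notional > 0 and 0 <= rate < 1,
          post=lambda interest: interest >= 0)
def simple_interest(notional, rate):
    """One period of simple interest; the contract documents what callers
    must supply and what they can rely on getting back."""
    return notional * rate

print(simple_interest(1_000_000, 0.05))   # 50000.0
# simple_interest(-5, 0.05) would raise: precondition failed
```

The appeal for large, stable systems is that each routine now documents and enforces its own obligations, which is very much the spirit behind Smart Contracts.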

The ULISSES Project: Looking to Games Again

I’ve said in the past that we should look to the Games industry. I’ve always thought of Games as a field that attracts some of the top minds in software, and as it turns out, they have already dealt with this issue of merging two very different languages, as different as modern Spanish and German. Rather than try to impose some software variant of Esperanto, they have taken the rather sensible route of producing game engines that let users explore and interact through flexible scripting languages such as Python, while all the hard lifting under the surface is done in languages such as C++, which let them make such good use of the objects so central to games. This is the obvious way forward, not only in uniting the languages but also in uniting the tribes, and it leads us to our most recent efforts in AIAX.
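Before getting to AIAX itself, here is a crude sketch of that engine pattern on a Unix-like system. The standard C math library stands in for the kind of C++ core a real engine would expose; nothing here comes from any particular engine, it simply shows the shape of “Python on top, compiled code underneath.”

```python
# Sketch of the "scripting surface over a compiled core" pattern used by
# game engines: Python stays flexible on top, the compiled library does
# the numeric work underneath. Here the C math library stands in for the
# C++ core; on Windows this particular lookup would not apply.
import ctypes
import ctypes.util

libm_path = ctypes.util.find_library("m")   # locate the C math library
core = ctypes.CDLL(libm_path)               # load the compiled core
core.sqrt.restype = ctypes.c_double
core.sqrt.argtypes = [ctypes.c_double]

class Engine:
    """Thin Python façade: users script against this, never the C ABI."""
    def magnitude(self, x, y):
        # Delegate the arithmetic to the compiled core
        return core.sqrt(x * x + y * y)

engine = Engine()
print(engine.magnitude(3.0, 4.0))  # 5.0
```

In a real engine the façade would wrap thousands of such calls, but the division of labor is the same: creativity in the scripting layer, horsepower in the compiled one.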

We will look to unite tools, such as highly customized Machine Learning tools and structure-driven Inverse Deduction tools, in a common framework in which users and creators only need to know how to communicate via Python. This goes back to the very reason we should be building such systems, as articulated by none other than Leibniz himself: we should free human beings to do what they do best, to create, while we use machines (both hardware and software) to do the laborious calculations for them. This is precisely what we are starting on with the AIAX Initiative, and I’m confident it will produce interesting and powerful results. All that said, I’m very excited to be working on this with Eric Jensen, who is not only the CTO of an AI company parsing information out of the news (an excellent use of probabilistic inference algorithms) but also happens to be one of the people who worked with computer scientist Robert Kessler in creating the award-winning Games education program.
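To give a feel for what that common framework might look like from the user’s side, here is a purely hypothetical sketch. None of these class or method names come from the actual AIAX design, and the two back ends are trivial stand-ins for real Machine Learning and Inverse Deduction tools.

```python
# Hypothetical sketch only: one Python-facing surface over two very
# different back ends, a statistical learner and a rule-based reasoner.

class StatisticalLearner:
    """Stand-in for a Machine Learning tool: keeps a running mean."""
    def __init__(self):
        self.mean, self.n = 0.0, 0
    def observe(self, value):
        self.n += 1
        self.mean += (value - self.mean) / self.n
    def predict(self):
        return self.mean

class RuleReasoner:
    """Stand-in for an Inverse Deduction tool: applies known structure."""
    def __init__(self, rules):
        self.rules = rules                      # premise -> conclusion
    def explain(self, fact):
        return self.rules.get(fact, "no rule covers this fact")

class Workbench:
    """The single surface a user scripts against from Python."""
    def __init__(self):
        self.learner = StatisticalLearner()
        self.reasoner = RuleReasoner({"earnings_beat": "guidance likely raised"})
    def ingest(self, value):
        self.learner.observe(value)
    def estimate(self):
        return self.learner.predict()
    def ask(self, fact):
        return self.reasoner.explain(fact)

wb = Workbench()
for v in [1.0, 2.0, 3.0]:
    wb.ingest(v)
print(wb.estimate())              # 2.0
print(wb.ask("earnings_beat"))    # guidance likely raised
```

The point is not the toy internals but the surface: the person doing the creative work only ever talks to the Workbench in Python, while the specialized tools underneath can be as heavy and as varied as they need to be.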

I have high hopes for this new direction of AIAX and will soon be updating the description of AIAX to reflect this new focus. Keep following for more updates.
