The Business of AI and Machine Learning

There seems to be a lot of confusion about what “artificial intelligence” and “machine learning” mean among the general populace, the Silicon Valley community, and company leadership teams around the world. I thought it might be worth trying to provide some clarity about the difference between the two ideas, explain how they emerged in modern culture, and outline how directors should start thinking about these technologies in business terms. The bottom line is that if your organization has not yet been affected by these two emerging ideas, it soon will be.

Most people learned what they know about AI from popular culture: the great science fiction movies, television shows, and novels in which primary or secondary characters are androids or cyborgs, such as Roy Batty in Blade Runner, Dolores in Westworld, and Wintermute and Neuromancer in Neuromancer. In recent years, though, modern scientists have started to talk about the possibility of creating real AI within this century–maybe before 2050. This will be the moment when a software program becomes sentient–that is, when the software becomes aware of its own existence and can make thoughtful decisions.

In 1993, the science fiction writer Vernor Vinge gave this milestone in computer automation a name–“the singularity”–but the idea is probably best represented in the original 1984 movie, The Terminator, when Skynet wakes up and decides that humans are not necessary, are harmful, and need to be wiped out. For most of us, this is just a fantasy we tell ourselves to be entertained. Recently, though, respected scientists and entrepreneurs such as Stephen Hawking, Elon Musk, and Bill Gates have said that we should be careful about this potential future. In the long term, yes, they are worried about a Skynet scenario; but in the short term, they are worried about job losses in the blue-collar workforce, where many AIs will be deployed first.

Machine Learning

According to Sam DeBrule, a co-founder of the Voice of Machine Learning journal, machine learning is a software-development technique used to teach a computer to do a task without explicitly telling the computer how to do it. Developers use big data techniques to search through large piles of data, looking for patterns that a human would never notice. In other words, we teach the program to teach itself, and big data is the key. This technique would not work without a very large collection of data. As an example, Palo Alto Networks uses machine learning to discover malicious files–files that bad guys send to victims in order to compromise their systems.

Palo Alto Networks has been in business for over 10 years, and we have a huge collection of files that have passed through our customers’ firewalls. We divide them into two buckets: known benign files and known malicious files. Our engineers then set their machine learning algorithms loose on the two piles of data. With a 95 percent accuracy rate, and without a human ever specifying what to look for, our machine learning algorithms can predict whether a brand-new file that we have never seen before is benign or malicious just by analyzing the characteristics of that file.
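To make that workflow concrete, here is a minimal, hypothetical sketch in Python. It is not Palo Alto Networks’ actual pipeline: the three file characteristics (size, an entropy score, a count of embedded URLs) and the data itself are invented purely for illustration. The pattern is the point–label two buckets of files, let an algorithm learn the difference, then ask it to judge a file it has never seen before.

```python
# Hypothetical sketch only -- invented features and synthetic data, not
# Palo Alto Networks' actual system or feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Pretend we measured three characteristics of every file in the two buckets:
# size in kilobytes, an entropy score, and a count of embedded URLs.
n_files = 10_000
benign = rng.normal(loc=[200.0, 4.0, 1.0], scale=[80.0, 1.0, 1.0], size=(n_files, 3))
malicious = rng.normal(loc=[150.0, 7.0, 5.0], scale=[80.0, 1.5, 2.0], size=(n_files, 3))

X = np.vstack([benign, malicious])
y = np.array([0] * n_files + [1] * n_files)  # 0 = known benign, 1 = known malicious

# Hold back some labeled files so we can measure accuracy on files the model
# has never seen -- the "brand-new file" scenario described above.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# The algorithm works out which combinations of characteristics separate the
# two buckets; no human tells it what to look for.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(f"Accuracy on unseen files: {accuracy_score(y_test, model.predict(X_test)):.1%}")

# Judge one brand-new file from its characteristics alone.
new_file = np.array([[180.0, 6.8, 4.0]])
print("Verdict:", "malicious" if model.predict(new_file)[0] == 1 else "benign")
```

Swap in real file characteristics and real labels and the shape of the work stays the same; the hard part in practice is assembling the very large, well-labeled collection of files in the first place, which is why big data is the key.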

The Turing Test

Alan Turing was a British scientist who devised the first test that would help humans decide if a machine could think. It is called the Turing Test, and it is a thought experiment that goes like this: a judge asks questions of two subjects behind a screen–one subject is a human and the other, a machine. If the judge can’t tell which subject is the human and which one is the machine, then the machine, for all practical purposes, can think.

Today we see examples of machines starting to pass the Turing Test in very specific knowledge domains, such as commercial flight autopilots, video game opponents, and online customer support. Other domains are almost there, such as self-driving cars (Tesla) and personal assistants (Amazon’s Alexa). With these emerging AIs, humans can still tell they are AIs, but it will not be very long before we can’t tell the difference. The main technique software developers use to build these remarkably human-like programs is machine learning. When pundits talk about progress in the AI field, they are mostly talking about recent progress in machine learning techniques–techniques that allow software to teach itself.

Getting to the Singularity

Self-driving cars are just one example of developers using machine learning techniques to build an AI for a single domain. For a car to become self-aware, however, it would have to teach itself how to drive. Most experts think this might be possible if the AI has a large collection of machine learning algorithms running internally that can learn from one another. Some say we might see the first such AI by 2050.

Boardroom Thinking

Most likely, the organizations for which you are responsible collect data in some form: customer data, product-usage data, website visitor data, manufacturing data, and so on. If you don’t have a team of machine learning experts experimenting with that data, you can be sure your competitors do. The businesses and organizations that get there first will win; those that never start will slowly disappear because they will not be able to keep up. Machine learning and artificial intelligence are here to stay. We might see the first singularity soon, and I know you don’t want Skynet to decide that you are no longer useful.
