AI: Fiction vs. Reality
 
 


Machine Learning supports business

 

AI and machine learning could change business operations - and it's nothing like the AI of movies.

 

Do you remember the first time you saw a movie where a bot with artificial intelligence (AI) became self-aware, surpassed human abilities, and exhibited unexpected behaviors? For me, it was The Terminator, and the idea of the bot rebelling and turning against humans gave me chills. On the friendlier side, Wall-E warmed our hearts with the idea that bots could feel human emotion. And who could forget the utterly absurd situation between Dr. Evil’s Fembots and Austin Powers? Our society’s fascination with the possibilities of AI is sprinkled all over our popular culture, and AI characters are used to evoke feelings like joy and warmth and, more commonly, fear and dread.

 

The sheer quantity of film and novel plots revolving around AI, coupled with the fact that technologists are constantly making advancements in the field, makes it seem as though we should be on the brink of advanced, self-aware bots surpassing human intelligence, right? Well, not exactly. Replicating human consciousness is extremely complex and not the primary focus of AI development at the moment. For now, self-aware AI bots and the category of “general AI” remain firmly in the imagination of science fiction storytellers. Instead, development has focused heavily on “weak AI” solutions: machines that have been programmed or trained to perform specific tasks. For the purpose of this conversation, I’m using the term “AI” to refer to computers that have been trained to perform a specific, narrow task, not to programs that approach anything like the human mind. In this area, we’ve seen tremendous progress, and AI technologies are being used to improve and benefit society in many ways.

 

All of the current advancements in the world of AI rely on a subset of artificial intelligence called machine learning. Machine learning is the practice of training a machine on data sets with an algorithm to build a model, testing that model for accuracy, and then using it to make predictions on new data. There are various branches of machine learning, such as supervised, unsupervised, and reinforcement learning, depending on the problem that needs to be solved and the desired outcome.
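
To make that workflow concrete, here is a minimal sketch of the train, test, and predict cycle. It assumes Python with the scikit-learn library and uses a small synthetic dataset purely for illustration; it is not tied to any particular business problem.

```python
# Minimal sketch of the train / test / predict workflow described above.
# Assumes Python with scikit-learn installed; the data is synthetic and
# purely illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Historical data: feature rows (X) with known outcomes (y).
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

# 2. Hold some data back so the model can be tested on examples it never saw.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# 3. Train (fit) a model on the training portion.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# 4. Test the model's accuracy on the held-out portion.
print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# 5. Use the trained model predictively on brand-new data.
new_record = X_test[:1]  # stand-in for a record arriving tomorrow
print("Prediction for new record:", model.predict(new_record))
```

The same basic cycle, fitting on historical examples, checking accuracy on data the model has never seen, and then scoring new records, sits underneath most of the business applications discussed below.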

 

As a business-driven executive working in an organization whose work is inextricably linked to technology, I see machine learning driving value in the business world on a daily basis.

 

In business operations, machine learning can be leveraged to find trends and anomalies in employee hiring and retention, financial forecasting, and IT operations analytics. In retail and service industries, it is being applied to improve customer care interactions and boost marketing success by predicting consumer behavior. Successful application of machine learning depends on having qualified people who know how to run, monitor, and test these systems to ensure precision and recall. Adoption of this technology is becoming essential to maintaining a competitive advantage in the marketplace.
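
For readers unfamiliar with those two metrics, here is a brief sketch of how precision and recall are computed when testing a system. It again assumes Python with scikit-learn, and the actual and predicted labels are hypothetical stand-ins for a real test set.

```python
# Brief sketch of the precision and recall checks mentioned above.
# The labels are hypothetical; in practice they would come from a
# held-out test set and the model's predictions on it.
from sklearn.metrics import precision_score, recall_score

actual    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # what really happened
predicted = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]  # what the model said

# Precision: of everything the model flagged, how much was correct?
print("Precision:", precision_score(actual, predicted))  # 4 / 5 = 0.80

# Recall: of everything that really happened, how much did the model catch?
print("Recall:", recall_score(actual, predicted))  # 4 / 6, about 0.67
```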

 

One area where machine learning is being applied that is of personal significance to me is stroke prevention research. Studies using machine learning pattern recognition techniques have shown promise in predicting which patients are at high risk for a cardiovascular event, such as a heart attack or stroke, more accurately than conventional risk prediction models. Predicting cardiovascular events has been frustrating for physicians up to this point, because about half of the patients who suffer an event don’t line up with traditional risk factors such as high blood pressure or elevated cholesterol. Machines can see patterns in data that may not be evident to one physician reviewing the records of one patient. In one study, scientists created an algorithm and trained it on 75% of a large set of historic patient data; it was then able to predict with 72% accuracy which patients in the remaining data had a cardiovascular incident. This was a significant improvement over the standard predictive measures, one that, in the real world, would have translated into 355 people who could have received preventive care to reduce their risk. This is just one of many AI studies under way in healthcare. Using AI technology hand-in-hand with physician knowledge and experience can lead to better preventive care and diagnostics. There are still hurdles to overcome, including evolving regulatory rules, access to the patient data needed to train machine learning systems, and getting doctors to incorporate the technology into their practices, a change that will likely require some form of incentive to take hold.
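
As a rough illustration of the kind of 75%/25% holdout evaluation that study describes (not a reproduction of it), the sketch below uses synthetic data as a stand-in for patient records, trains a classifier on 75% of them, and compares its accuracy on the held-out 25% against a naive baseline standing in for a conventional predictor.

```python
# Rough illustration of a 75% / 25% holdout evaluation like the one
# described above. The data is synthetic; real studies use de-identified
# patient records and far more careful validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for historic patient data with a known outcome column.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.8, 0.2], random_state=0)

# Train on 75% of the records, hold out 25% for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Naive baseline: always predict the most common outcome.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

# Machine learning model trained on the same 75%.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("Baseline accuracy on held-out 25%:",
      accuracy_score(y_test, baseline.predict(X_test)))
print("Model accuracy on held-out 25%:",
      accuracy_score(y_test, model.predict(X_test)))
```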

 

AI technology has the potential to revolutionize many industries if developers create solid tools that focus on the end user, get buy-in from business leaders, and properly test and tune their products. In the creation stages, it will be critical for developers to focus on the end customer. Tools should be accessible, usable, and friendly, and avoid the “creep” factor associated with some AI technology. (The “creep” being the unsettling feeling of interacting with a bot that feels too human when you know it’s a machine.) It is also important to secure buy-in from business leaders: careful efforts need to be made to win the support of key figures who will act as change champions for these new technologies. These leaders need to understand the full business reason behind the technology and the benefits for customers and future business health. To avoid unsettling interfaces, embarrassing errors, and inaccurate functionality, AI technology should be tested and tuned both during development and after release, to ensure it operates as expected. As AI tools crunch more data, the underlying machine learning algorithms may need to be adjusted to keep producing worthwhile results. Left unchecked and unattended, AI tools run the risk of becoming irrelevant.

 

One humorous example of the importance of testing comes from June 2017, when reports of “Facebook bots going rogue” began to surface. These reports proved to be highly sensationalized; in reality, the Facebook AI Research (FAIR) unit was simply acknowledging an innocuous finding: when you give AI bots the goal of negotiating with one another but don’t specify a communication language, the bots will adapt in the best way they know how – by creating their own form of communication, in this case unsophisticated gibberish. The researchers hit the pause button, reset the parameters to require English, and proceeded with testing. The rest of the world, however, was enthralled and quick to draw comparisons to the behavior of fictional AI characters. The FAIR experiment is an example of machine learning in a highly controlled research environment, but even in real-world machine learning deployments, humans are in control, constantly testing, supervising, and adjusting algorithms to ensure they produce the targeted outcomes.

 

For all the entertainment that fictional AI characters bring to our culture, it’s important to remember the reality of AI (or, more accurately, the reality of machine learning): the improvements it has produced thus far and the exciting applications under development. The real value of this technology comes from the way it allows humans to elevate the level of progress made in the world. Machine learning tools take on tedious, data-driven tasks, making way for humans to apply our creativity and intuition to solve problems. Humans and AI make a powerful team, but don’t forget that humans are definitely in charge (I’m talking to you, Terminator!).

 
