The Intelligent Automation Journey

by Soumitra Bandyopadhyay, December 5th, 2018

Is your organization on it yet?

Every now and then a new technology comes along and claims to be the best thing since sliced bread. In the ’90s, anything ‘online’ was better than its offline version. A few years into this century, Bluetooth came along and promised the world; for a while, it was the go-to technology for charging an arm and a leg. Household tech, appliances, even items of furniture, cars, clothing, and accessories were all offered with Bluetooth.

What is better than a toaster? A toaster with Bluetooth.

Likewise, on the enterprise front, cloud computing has been the transformative force since the beginning of this century. The cloud revolution led to the ubiquity of processing power and access to large datasets. This, in part, contributed to the revival of an age-old algorithm called the neural network. As is widely reported, neural networks and deep learning algorithms catapulted this heavily researched but lesser-known stream of science, called artificial intelligence, into mainstream technology.

Arguably one of the most transformative technologies thus far, artificial intelligence, or AI, is a paradigm shift in that its algorithms are designed to ‘learn’ from data much like humans do, and eventually to match or surpass human ability in thinking and decision making. We experience applications of AI in many fields today: in gaming, where the term first entered the mainstream; in Alexa and Siri; in the autonomous cars on the roads; and in the social networks Facebook, Twitter, and Instagram, all of which run on AI.

What is better than a toaster? A toaster with AI.

Entering the age of AI: learning, of course, remains the crucial aspect, and unfortunately these algorithms learn a great deal more about us and our activities on these platforms over time than we realize.

AI algorithms, therefore, do not contain the answers, nor are they created to calculate the answers. Instead, they are containers with structure and linkages that can ‘learn.’ An AI system is like our brain, or its closest software counterpart: built from code alone, it needs hardware that can supply a lot of processing power, and it needs data to run on. The intelligence is acquired over time through training, by feeding it the right data; remember, the algorithms are only as good as the data they feed on, literally. It is similar to how we went to school at an early age, were fed a lot of data and knowledge, and were given progressively harder challenges to solve so that our intelligence could evolve. An AI system acquires intelligence the same way: incrementally, through training.

The anomaly about AI, one that is not so widely discussed, is that unlike previous technologies, AI is not an instant solution, at least not yet.

It is a capability that ‘learns’ over time using data relevant to its use cases, most likely data from your organization, and only then produces the insights and benefits relevant to that same organization. To properly leverage AI, therefore, an organization has to have pipelines of data that can be fed to the algorithms to train them in the first place. And no one is surprised that most organizations still have siloed, unstructured data to begin with. The journey of AI for many organizations thus starts at a much earlier point: digitalization and creating data pipelines. It is a journey, and where you currently are and when you begin determine when you start to receive the benefits and, after that, when you reach the peak.

AI is often associated with a similar-sounding acronym, IA, which stands for Intelligent Automation: the application of AI, ideally at incremental levels of complexity, to achieve the desired benefits of automation and autonomous operation.

Cliff Justice of KPMG, in an article, forecasts the market for Intelligent Automation to grow from $12.4B in 2018 to $231.9B in 2025: nearly 18-fold, or roughly 1,770%, growth in 7 years. It is one of the clearest indications that most companies are rushing toward adoption.

Source: advisory.kpmg.us

Nonetheless, as pointed out before, not all organizations are equally poised to jump into an automation journey. It starts with data, or more precisely with the availability of the right kind of data. Let us review in further detail how each of the stages, digitize, analyze, and then automate, can be implemented incrementally to embed AI into organizational processes.

Digitize — everything, right away!

With the explosive growth in the use of digital devices, we have massive amounts of data, and we are generating new data at a maddeningly high pace. The global data volume is predicted to grow to more than 5 times its current size in the next 7 years: from 33 zettabytes in 2018 to 175 zettabytes by 2025. In case you stopped counting, 1 zettabyte is 1 trillion gigabytes. Much of the increase is contributed by IoT: things, or devices, that are connected and always generating data.

So what do we do with all that data?

Analyze it? Okay.

The unfortunate part is that the majority of our data is unstructured, roughly 80% of it at current levels, and cannot be readily analyzed. For example, many of the repetitive tasks we carry out, say booking next week's trip or filing an expense report from the last one, deal with fragmented, unstructured data: a paper bill here, an email receipt there, and so on. Since the data is not received from or stored in one place, or with one pre-defined model, the task never gets fully automated and continues to require our attention and time.
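
To make the expense-report example concrete, here is a minimal sketch, in Python, of what converting such fragments into structured data can look like. The field names, text patterns, and the `ExpenseRecord` shape are hypothetical, invented for illustration; real receipts are far messier, which is precisely the problem.

```python
import re
from dataclasses import dataclass

@dataclass
class ExpenseRecord:
    vendor: str
    date: str
    amount: float

def parse_receipt(text: str) -> ExpenseRecord:
    """Pull structured fields out of free-form receipt text.

    The patterns here are illustrative; real receipts vary wildly,
    which is exactly why unstructured data resists automation.
    """
    vendor = re.search(r"Vendor:\s*(.+)", text)
    date = re.search(r"Date:\s*([\d/-]+)", text)
    amount = re.search(r"Total:\s*\$?([\d.]+)", text)
    return ExpenseRecord(
        vendor=vendor.group(1).strip() if vendor else "UNKNOWN",
        date=date.group(1) if date else "UNKNOWN",
        amount=float(amount.group(1)) if amount else 0.0,
    )

receipt = """Vendor: Acme Taxi
Date: 2018-11-28
Total: $42.50"""
print(parse_receipt(receipt))
# ExpenseRecord(vendor='Acme Taxi', date='2018-11-28', amount=42.5)
```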

Therefore, before launching a journey into AI, every organization should have a digitalization and data governance plan in place, one that ensures the highest possible proportion of the data the organization generates can be used for analytics, maximizing the utility of, and the learning from, that data.

Analyze — in three levels

Data analysis can become increasingly complex depending on the goals of the analysis; it could be aimed at hindsight, foresight, or insight:

  • Fact-finding — focuses on discovering and describing the past: what already happened and why it happened. This is the fact-finding mission, so to speak, and the most common type of data analysis in our day-to-day work, including reports, drill-down queries, and alerts.
  • Forward-looking — requires more scientific rigor and complex data modeling, including statistical modeling, to deduce what is likely to happen and then recommend the best approaches to deal with it; it spans statistical analysis, forecasting, predictive modeling, and optimization (see the sketch after this list).
  • Scenario analysis — the most advanced level of analysis; the goal is to produce outcomes based on scenarios of inputs or actions taken. The models often learn from data to build the reasoning behind their recommendations or executions for specific situations. This is the most complex of the bunch.
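
As a minimal illustration of the forward-looking level, the sketch below fits a simple trend to hypothetical monthly ticket volumes with scikit-learn and projects it a quarter ahead. The data and the choice of a plain linear model are assumptions made for illustration; real forecasting work involves far more rigor, as the list above notes.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hindsight: twelve months of (hypothetical) support-ticket volumes.
months = np.arange(12).reshape(-1, 1)
volumes = np.array([310, 320, 335, 330, 350, 365,
                    360, 380, 395, 400, 415, 430])

# Foresight: fit a simple trend and project it forward.
model = LinearRegression().fit(months, volumes)
next_quarter = np.arange(12, 15).reshape(-1, 1)
forecast = model.predict(next_quarter)
print(forecast.round())  # projected volumes for the next three months
```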

AnalyticsContinuum1 flickr photo by sombando shared under a Creative Commons (BY-SA) license

Automate — in six stages

The industrial revolutions came in waves, and each brought a paradigm shift in innovation and automation: mechanization through steam engines starting in the 1780s; electricity and mass production beginning in the 1880s; and computation, using logic gates and computers in general, starting in the 1970s. Finally, the revolution of cyber-physical systems that we are presently in achieves automation through better use of data: gathering learning and insights from the data, then feeding those insights back into physical systems using networks and the internet.

Industrial Revolutions; This Photo by Unknown Author is licensed under CC BY-SA

Therefore, the premise is that the more we can analyze our data and use insights from it, the more of what we do we can automate. The biggest impediment, of course, is that a large proportion of the data remains unstructured to begin with, i.e., stored in a manner that is not pre-defined and thus cannot readily be integrated and analyzed at scale.

The automation continuum, therefore, begins with a process in which fragmented data is converted to ‘structured’ data with minimal change to the IT landscape.

That structured data can then be analyzed and learned from, following the analytics continuum, with increasing levels of complexity and learning from the data.

1. Robotic Desktop Automation or RDA

Software mimicking human action, like a ‘macro’ in Excel, can be recorded once and then run as many times as needed, executing multiple steps and connecting a workflow across many fragmented applications accessed from a computer, or desktop. It is generally used to record rules-based, repetitive tasks. Like a macro, this is a flexible, attended process and can be designed to pause for user entry or decisions during execution. Processes involving few or many steps alike can be automated with this rather cost-effective software, without ever needing to invest in large IT systems, while achieving gains in efficiency, structure, and control over the process.
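
As a rough sketch of the idea, assuming Python with the pyautogui library, an RDA-style ‘recorded macro’ might look like the following. The coordinates, hotkeys, and the `copy_invoice_to_erp` workflow are placeholders invented for illustration; commercial RDA tools record these steps for you instead.

```python
import pyautogui  # pip install pyautogui

def copy_invoice_to_erp():
    """Replay a recorded, rules-based desktop workflow.

    The coordinates and hotkeys below are placeholders; in practice
    they come from recording the steps once, as with an Excel macro.
    """
    pyautogui.hotkey("alt", "tab")   # switch to the source application
    pyautogui.hotkey("ctrl", "a")
    pyautogui.hotkey("ctrl", "c")    # copy the invoice field
    pyautogui.hotkey("alt", "tab")   # switch to the target application
    pyautogui.click(640, 480)        # focus the entry field
    pyautogui.hotkey("ctrl", "v")
    # Attended automation: pause for the user's decision mid-run.
    answer = input("Submit this entry? [y/n] ")
    if answer.lower() == "y":
        pyautogui.press("enter")

copy_invoice_to_erp()
```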

2. Robotic Process Automation or RPA

Similar to RDA, but automated further so that user entry is not needed and the entire process runs end to end. For example, for matching paper documents, OCR (Optical Character Recognition) software is used so the process executes without manual intervention.
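
To sketch that unattended OCR step, assuming Python with pytesseract (a wrapper around the open-source Tesseract engine) and Pillow: the file name, the "Invoice #" convention, and the exception-queue routing below are illustrative assumptions, not a prescription.

```python
from PIL import Image
import pytesseract  # pip install pytesseract; requires the Tesseract binary

def extract_invoice_number(scan_path):
    """OCR a scanned document so a bot can match it without a human."""
    text = pytesseract.image_to_string(Image.open(scan_path))
    for line in text.splitlines():
        if line.strip().startswith("Invoice #"):
            return line.split("#", 1)[1].strip()
    return None

# An unattended (RPA) bot calls this in its loop; unlike RDA there is
# no pause for user input, so unmatched documents are routed onward.
number = extract_invoice_number("scanned_invoice.png")
print(number or "no match: route to exception queue")
```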

automationcontinuum flickr photo by sombando shared under a Creative Commons (BY-SA) license

3. Digitized RPA

An extension of RPA with self-serve capabilities, such as communication via IVR, mobile, or the web, to enhance automation further.

4. Machine Learning or ML

Software containing algorithms that can learn from the data provided to them, through a process called ‘training,’ and can then be used for predictive and prescriptive types of analysis that replace or augment human decision making. ML is generally integrated with an RPA solution to enhance automation.
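
A minimal sketch of that ‘training’ step, assuming Python and scikit-learn. The expense features, labels, and values are made up for illustration; the point is that the decision rule is learned from past outcomes rather than hand-coded.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical past cases: [amount, days_to_submit, has_receipt]
X = [[42.0, 2, 1], [980.0, 30, 0], [15.5, 1, 1],
     [600.0, 25, 0], [75.0, 3, 1], [1200.0, 40, 0]]
y = ["approve", "review", "approve", "review", "approve", "review"]

# 'Training': the model learns a decision rule from past outcomes.
clf = DecisionTreeClassifier().fit(X, y)

# An RPA bot can now ask the model instead of a hard-coded rule.
print(clf.predict([[50.0, 4, 1]]))    # likely ['approve']
print(clf.predict([[900.0, 35, 0]]))  # likely ['review']
```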

5. Cognitive Solutions

An ML system further integrated with process-execution capabilities, so that processes which previously required a human's decision or judgment can now be executed seamlessly, without human intervention.
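
One hedged way to picture the difference from stage 4: the model's output feeds straight into process execution, with a confidence threshold deciding when a human is still pulled in. The model, features, and threshold below are the same kind of illustrative assumptions as in the ML sketch above.

```python
from sklearn.tree import DecisionTreeClassifier

# Same hypothetical expense model as in the ML sketch above.
X = [[42.0, 2, 1], [980.0, 30, 0], [15.5, 1, 1], [600.0, 25, 0]]
y = ["approve", "review", "approve", "review"]
clf = DecisionTreeClassifier().fit(X, y)

def execute_or_escalate(case, threshold=0.9):
    """Cognitive step: act autonomously only when confidence is high.

    Below the threshold the case falls back to a human (the 'augment'
    mode of stage 4); above it, the process runs with no human at all.
    """
    proba = clf.predict_proba([case])[0]
    label = clf.classes_[proba.argmax()]
    if proba.max() >= threshold:
        print(f"auto-executing: {label}")  # e.g., post the approval
    else:
        print(f"escalating to a human (confidence {proba.max():.2f})")

execute_or_escalate([50.0, 4, 1])
```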

6. Narrow AI, Broad AI (Artificial General Intelligence)

An expansion of AI’s scope and application from a few processes to a broad set of domains, enabling ‘deductive’ or scenario analysis. Artificial general intelligence is a further expansion of the range of the learning algorithms, to the point where they can mimic or surpass the human capability to learn across many subjects and domains. This has not been achieved yet and is theorized to arrive sometime in the future.

Note that RPA/RDA converts information from siloed processes and unstructured data into a specific data model, which can then feed an enhanced level of automation using ML, AI, and so on.

Starting your automation journey — in five phases

Unlike previous waves of automation, the development and integration of AI is unique in that it is not a plug-and-play process, at least not yet. Because the ‘learning’ is specific to an organization’s own data, the development, integration, interaction, and governance processes happen in stages, each of which involves its own set of ‘learning,’ so to speak. Rashed Haq’s recent article on Bloomberg lays out best practices for Enterprise AI. There are several phases in which the implementation takes place, assuming the required data pipeline is built and fed properly:

A. Experimentation Phase

There are many AI infrastructures available nowadays, including IBM Watson and Google TensorFlow. You may think of those infrastructures like a preformatted PowerPoint slide template: you select how many slides, the format, the theming, and so on, but the actual content still comes from you.

Similar to how copying from another sales deck looks off and suboptimal, AI algorithms produce better results when they are not copied but custom developed for each application. At least for now.

These infrastructures provide the architecture on which you need to build, e.g., by developing and training the systems with data specific to your application. The development process thus far resembles Ph.D.-level research: the stages are iterative and proceed largely by trial-and-error experimentation, including repeated validation and testing for accuracy and bias. Unless one can find an AI that is an exact match for another organization’s problems and required solutions, which is as rare as the perfectly reusable slide deck, the chances are slim that a trained AI system copied from another organization will produce optimal results.
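
To make the template analogy concrete, here is a minimal sketch using TensorFlow's Keras API: the framework supplies the layers and the training loop, while the architecture choices and, crucially, the data are yours. The layer sizes, labels, and random stand-in data below are assumptions for illustration only.

```python
import numpy as np
import tensorflow as tf

# The framework supplies the building blocks; the architecture,
# the hyperparameters, and the data are yours to supply.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Hypothetical stand-in for your organization's own data pipeline.
X = np.random.rand(200, 4)
y = (X.sum(axis=1) > 2.0).astype(int)

# The iterative, trial-and-error part: train, validate, adjust, repeat.
model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```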

B. Deployment Phase

Once the AI system is developed, it takes time to adapt business processes to embed it into the workflow. Unless the AI lives within a system that is already implemented, say SAP, its output has to be applied within day-to-day processes and workflows. That requires ‘business process re-engineering’ or ‘change management,’ and anyone who works in an organization with a team named as such knows how difficult and time-consuming those can become.

C. Integration Phase

Assuming we are following the automation journey depicted earlier, most organizations new to AI will first need to implement AI to ‘augment’ human decision making and action, rather than as an end-to-end solution. The human counterpart, therefore, has to learn to work alongside the AI's recommendations and get better at it over time. That, too, involves learning.

D. Governance Phase

Since AI is mostly aimed at predictive and prescriptive applications, even though the data it learns from is past data, it is crucial that the systems are kept up to date by feeding in, and retraining with, newer data. AI systems are also black boxes, in that it is neither understood nor visible how they arrive at specific learnings; one therefore has to monitor the recommendations they produce and sift through them to check whether they are developing unintended biases over time.
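
A sketch of what that monitoring can look like in practice, with made-up data and a deliberately naive stand-in model; the accuracy floor, the monthly cadence, and the choice of accuracy as the metric are all assumptions to adapt to your own process.

```python
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

def needs_retraining(model, recent_X, recent_y, floor=0.85):
    """Governance check: score the model on freshly labeled cases.

    A drop below the floor suggests the world has drifted away from
    the historical data the model learned from, i.e., time to retrain.
    """
    score = accuracy_score(recent_y, model.predict(recent_X))
    print(f"accuracy on recent data: {score:.2f}")
    return score < floor

# Hypothetical monthly job with stand-in data and model.
model = DummyClassifier(strategy="most_frequent").fit([[0], [1]], [0, 0])
recent_X, recent_y = [[0], [1], [2], [3]], [0, 1, 1, 1]
if needs_retraining(model, recent_X, recent_y):
    print("schedule a retraining run with the newer data")
```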

E. Scaling Phase

Undoubtedly, because of these learning stages, building and deploying an AI system is much more nuanced and staged than with any previous technology. Once the pieces come together, however, it can be scaled very quickly.

Motobot — a case study in autonomy

Valentino Rossi is one of the best motorcycle racers humanity has ever produced. He is the only rider in history to have won the world championship in four different classes (125cc, 250cc, 500cc, and MotoGP) and one of the most successful champions of all time, with nine Grand Prix World Championships. MOTOBOT is a 3-year-old humanoid robot created by Yamaha Motors, capable of autonomously riding a motorcycle around a racetrack.

Can Motobot beat you around the track? Yes.

Can Motobot beat Rossi? No, it can’t…yet. (Go, Rossi!)

Humans — 1, Robots — 0

Yes, the Yamaha Motobot is real and is becoming faster and better over time. The day is not far off when Motobot will beat Rossi, not because Rossi is getting old but because Motobot is being fed more data and is learning to become better.

Once Motobot learns to beat Rossi on the track, many such Motobots can be created almost immediately, each of which will be able to beat Rossi. Compared to humans, AI is infinitely more scalable; it is just software, and software lends itself to being copied. Similarly, companies in any industry that are ahead in the game of implementing successful AI systems can immediately scale those systems, becoming much harder, if not impossible, to beat.

However, another company or application, say a scooter-hailing company, may not be able to replicate the Motobot AI to power its two-wheelers autonomously on public roads, since roads present completely different challenges from a racetrack and the goals are quite different too. Though the two use cases are loosely similar, both being autonomous two-wheelers, they are different use cases, and each has to be built on its own.

Therefore, most AI systems are custom built; once built, however, they are infinitely scalable for the same application.

Motobot vs. Rossi; source: youtube.com

Summary

The industrial revolution of the 21st century is upon us, brought by AI and its applications in automation and autonomy. It is transforming how machines are built and used to augment and benefit human civilization. Though AI is far from a fully developed technology, the predicted exponential, almost unlimited, advantage of successful AI systems brings with it questions of ethics and of the future of our civilization: whether, in the wrong hands, AI systems can cause irreparable harm to us, willingly or not. That is a bigger question.

However, the industrial revolution of the 21st century is going to be fueled by data, and a lot of it. The biggest challenge, of course, is that most data remains unstructured and unusable. Digitalization, to capture data in a structured format, and RPA-like automation are steps in the right direction. Organizations that are better prepared with their data are already ahead in the game. AI systems differ from past technologies in that they still require long, drawn-out processes of custom building and implementation for every application, through several iterative stages.

It is evident that this time the quick second mover will remain at a disadvantage, owing to the long lead time and the layered learning curve involved in developing and benefitting from AI systems.

Therefore, you should worry if your organization is not already trying and testing AI systems and is instead waiting for the technology to mature. Companies using the technology today will propel forward once it matures, and it will become challenging to catch up, let alone gain an advantage.

So, what do you think? Is there anything you recommend we add or change? Please feel free to ask if you would like any clarification or additional information. You can contact or follow me via Email, LinkedIn or Twitter.

Thanks for reading. If you enjoyed the article, feel free to like or share, so that others can find it too.