AI and Machine Learning: What Is the Hype About? Are We Ready for Them?

By Kris Seeburn

I remember the 1980s, when the gurus and pundits of technology told us we would already be colonizing Mars by now, that AI would be doing everything for us, and that we would have flying cars straight out of Back to the Future. I remember the 1990s too, when an AI revolution was being talked about and Microsoft came out with Windows 95 – and what a revolution that was, a real turning point in history. But the ideas being pushed today are nothing new; they are old ideas re-used under different labels. Cloud computing, for instance: do you think it is new? Is virtualization new? These concepts were already around in the 80s. The only issue was processing power and speed, so they were put back in the drawer. Now, with little genuinely new to offer, we take out the old drawings, give them different names and present them as a wow factor. In any case, we will treat some of these here: mainly AI, machine learning, and the data holders such as the cloud.

With all the attention Artificial Intelligence (AI) attracts these days, a backlash is inevitable – and could even be constructive. Any technology advancing at a fast pace and with such breathless enthusiasm could use a reality check. But for such a correction to be useful, it must be fair and accurate.

Nevertheless, buzzwords are part of what makes the internet go ’round, and you’d be hard-pressed to find a more popular and controversial term today than Artificial Intelligence (AI). Once just an ethereal concept that interested the nerdiest among us, AI has become a very real obsession in all corners of the tech world.

Behind the hype, the truth is far different: we need AI, and AI needs us. Just as the industrial revolution shifted the burden of drudge work and large-scale heavy lifting onto the backs of machines, the AI revolution will allow humans to slough off the drudge work associated with computing, enabling them to focus on contextualization and reasoning – the kinds of things humans are good at.

Furthermore, unlike the news stories that hint at an AI takeover, the need for the technology is not based on conjecture or predicted trends. There happens to be a major worker shortage in the economy today, one that is hurting profits and holding the economy back from its full potential.

Part of the problem with these opinions is the set of expectations built into the word “AI.” The question of how best to define AI has always existed, and skeptics argue that overly broad definitions, and too-willing corporate claims of AI adoption, label as AI something we do not actually have. We have yet to see self-aware machines like 2001’s HAL or Star Wars’ R2-D2, but holding the field to that standard is simply over-reach.

The highest aspirations for AI – that it should reveal and exploit, or even transcend, deep understandings of how the mind works – are undoubtedly what ignited our initial excitement in the field. We should not lose sight of that goal. But existing AI programs that serve humbler, lower-level functions provide great utility as well, and bring us closer to that goal.

For instance, many everyday activities humans carry out look simple but aren’t straightforward at all. A Google system that ferrets out toxic online comments, a Netflix video optimizer based on feedback gathered from viewers, and a Facebook effort to detect suicidal thoughts posted to its platform may all seem like simple human tasks.

Critics may disparage these examples as activities performed by non-cognitive machines, but they nonetheless represent technically interesting solutions that leverage computer processing and massive amounts of data to solve real and interesting human problems. Identifying and helping a potential suicide victim just by scanning their online posts – what could be more laudable, and what might once have seemed more unlikely to be achieved via mere “computation”?

Much of AI is conceptually simple, yet it often works. And by the way, it’s actually not so simple to understand when it works and when it doesn’t, and why, or how to make it work well. You could make the model underlying it more complex or feed it more data – for example, all of Netflix’s subscribers’ viewing habits – but in the end, it’s understandable. It’s distinctly not a ‘black box’ that learns in ways we can’t comprehend. And that’s a good thing. We should want to have some idea how AI works, how it attains and uses its ‘expert’ knowledge.
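
To make the “conceptually simple yet understandable” point concrete, here is a deliberately tiny Python sketch of the kind of co-occurrence logic a viewing recommender might start from. The viewing histories and titles are invented for illustration only; a real service would use far richer models and vastly more data.

```python
from collections import Counter, defaultdict

# Toy viewing histories: user -> set of titles watched (illustrative data only).
histories = {
    "ana":   {"Stranger Things", "Dark", "Black Mirror"},
    "bruno": {"Dark", "Black Mirror", "The OA"},
    "chloe": {"Stranger Things", "Black Mirror"},
}

# Count how often each pair of titles is watched by the same user.
co_counts = defaultdict(Counter)
for titles in histories.values():
    for a in titles:
        for b in titles:
            if a != b:
                co_counts[a][b] += 1

def recommend(title, k=2):
    """Suggest the k titles most often co-watched with `title`."""
    return [t for t, _ in co_counts[title].most_common(k)]

print(recommend("Dark"))  # e.g. ['Black Mirror', 'Stranger Things']
```

There is nothing mysterious in it: every recommendation can be traced back to the counts that produced it, which is exactly the kind of comprehensibility worth preserving as models grow.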

But is it really? Some people look at AI, realize it is “just programs,” and feel let down – but of course it is just programs; that’s the whole point of AI.

To be disappointed that an AI program is not more complicated, or that its results aren’t more elaborate – even cosmic – is to misstate the problem that AI is trying to address in the first place. It also threatens to derail the real progress that continues to accumulate, progress that may one day give machines the very things humans possess and that critics of real-world AI pine for: volition, self-awareness, and cognition.

In the end, all scientific endeavors, including AI, make big leaps by working on more basic – and perhaps, only in hindsight, easier – problems. We don’t solve the ultimate challenges by jumping right to working on them. The steps along the way are just as important – and often yield incredibly useful results of their own. That’s where AI stands right now. Solving seemingly simple yet fundamental challenges – and making real progress in the process.

The best way to approach AI is to examine exactly what it can and cannot do. There are dozens of technologies that fit under the rubric of AI, among them chatbots, neural networks, machine learning, natural language processing, swarm intelligence and sentiment analysis. The more we examine AI, the more we realize that it needs a guiding human hand to fulfill its potential.

Sentiment analysis: A technology that seeks to understand general attitude, meaning or opinion behind text or other types of content (videos, articles, social media posts, etc.). Sentiment analysis is used for many purposes, such as gaining insights into customer preferences, political preferences or even looming security threats, as social media posts of prospective criminals or terrorists are parsed.
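
As a hedged illustration only, here is a minimal lexicon-based scorer in Python. The word lists, example sentences and the scoring rule are invented for this sketch; production sentiment systems use far richer models.

```python
# A deliberately tiny sentiment scorer: counts positive vs. negative words.
POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"terrible", "hate", "awful", "sad", "bad"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this product, the support was excellent"))  # positive
print(sentiment("awful experience and I hate the new update"))      # negative
```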

Chatbots: An application that uses natural language processing, sentiment analysis and other AI tricks to talk to you online. Chatbots became famous in mid-2016 when Facebook backed the technology for use on its Messenger platform. Since then, thousands have come online. A customer could query a retail chatbot for information or a service (“Show me black loafers, size 10.”). The chatbot would either respond by performing the request, or asking follow-up questions (“Are you interested in penny loafers or drivers?”).
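
The core pattern is easy to sketch: match the incoming message against known intents, then either answer or ask a follow-up. The intents, patterns and replies below are invented purely for illustration.

```python
import re

# Very small intent table: regex pattern -> canned response or follow-up question.
INTENTS = [
    (re.compile(r"show me (?P<item>.+?), size (?P<size>\d+)", re.I),
     "Here are {item} in size {size}. Anything else?"),
    (re.compile(r"\bloafers?\b", re.I),
     "Are you interested in penny loafers or drivers?"),
]

def reply(message: str) -> str:
    for pattern, response in INTENTS:
        match = pattern.search(message)
        if match:
            return response.format(**match.groupdict())
    return "Sorry, I didn't catch that. Could you rephrase?"

print(reply("Show me black loafers, size 10."))
print(reply("Do you stock loafers?"))
```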

Machine learning (ML): A system that can self-improve with exposure to useful data. ML can, on the basis of existing data, determine patterns, build models and even make predictions. An ML-equipped piece of software or robot could theoretically be programmed to do a job by itself, and get better and more efficient as time goes on. ML-powered robots are just a few years from being able to fold your laundry without help, and a few decades from performing surgery by themselves.
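
As a hedged sketch of “self-improving with exposure to useful data”, the Python snippet below uses scikit-learn (assumed to be available) on invented synthetic data: the same simple model, retrained on progressively more examples, typically scores better on a held-out test set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n_per_class):
    # Two overlapping Gaussian clusters standing in for two categories.
    X0 = rng.normal(loc=-1.0, scale=1.5, size=(n_per_class, 2))
    X1 = rng.normal(loc=+1.0, scale=1.5, size=(n_per_class, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

X_test, y_test = make_data(2000)          # held-out data for evaluation

for n in (5, 50, 500):
    X_train, y_train = make_data(n)       # "exposure to useful data"
    model = LogisticRegression().fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{2 * n:4d} training examples -> test accuracy {acc:.2f}")
```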

AI: NOT THE END, BUT A NEW BEGINNING

While it’s tempting to believe AI will put humans out of work, history tells us another story. Machines such as automobiles, sewing machines and the printing press did not put people out of business, despite people’s worries at the time. Instead, they actually created new jobs and industries, freeing humans from drudge work and allowing them to blossom in higher-level, more creative jobs. Without the human touch, those machines could not have done their jobs as effectively.

Meanwhile, even with all the technology, the number of jobs available has grown exponentially. Between 1960 and 2005, for example, total employment in OECD countries rose by 452 million jobs, representing a 76% increase in the proportion of the population employed. Many of those jobs involve a human-tech partnership.

Technology gives humans much greater ability to work efficiently by allowing them to concentrate on the creative aspects of work that help businesses thrive. Of course, nobody knows what the future will bring, but experience, as well as experts who have good records of predicting trends, tell us: The best applications and jobs are yet to come.

Big data, for example, refers to the large volume of data that inundates businesses daily. The opportunity lies in the ability to unlock the meaning behind that data as its volume increases.

We use data analytics to report on trends, understand variability in processes, identify operations that have moved outside accepted statistical limits, and provide early identification of possible bottlenecks and capacity constraints.
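
One concrete, hedged example of the “outside accepted statistical limits” idea is a simple three-sigma check in Python; the baseline measurements and new observations below are invented for illustration.

```python
import statistics

# Invented baseline cycle times (minutes) from a period known to be in control.
baseline = [4.1, 3.9, 4.0, 4.2, 3.8, 4.1, 4.0, 3.9, 4.1, 4.0]
mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
lower, upper = mean - 3 * sigma, mean + 3 * sigma   # classic three-sigma limits

# New observations are checked against the baseline limits.
new_observations = [4.0, 4.2, 6.3, 3.9]
flagged = [x for x in new_observations if not lower <= x <= upper]
print(f"limits: [{lower:.2f}, {upper:.2f}]  flagged: {flagged}")   # flags 6.3
```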

Machine learning is about analysing big data in a strategic way, providing insights that lead to better decision-making. This in turn can create significant cost savings, reduce risk, improve efficiency and open up the potential to identify new opportunities.

The applications of machine learning are virtually endless. As an example, we have successfully applied the technology to a classification problem: for an unidentified data point, SMS machine learning will predict the category to which that point belongs.
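
A hedged sketch of that kind of classification, using scikit-learn (assumed available); the features, category labels and the new “unidentified” point are all invented for illustration.

```python
from sklearn.neighbors import KNeighborsClassifier

# Invented training data: [feature_1, feature_2] with known category labels.
X_train = [[1.0, 1.2], [0.9, 1.0], [1.1, 0.8],   # category "A"
           [5.0, 4.8], [4.9, 5.1], [5.2, 5.0]]   # category "B"
y_train = ["A", "A", "A", "B", "B", "B"]

model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

# Predict the category of a previously unseen, unidentified data point.
unidentified_point = [[4.7, 5.3]]
print(model.predict(unidentified_point))   # -> ['B']
```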

It goes on and on. Suddenly, your AI code, your supposed crown jewels, is just a small cog in a very large, complex and buggy machine.

Raw data is like a liquid: it is digitized and gathered before it enters the production pipeline. Next, the streams of data need to be processed by tools such as Apache Kafka or Apache Storm before they can be stored using something like Hadoop. The data – whether it’s an image, text, or sound file – then needs to be extracted, transformed so that it’s formatted in a way that supports vector calculations, and loaded into a neural network for training.
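
A drastically simplified sketch of those extract–transform–load stages in plain Python: in reality the stream would arrive via Kafka or Storm and land in Hadoop, but the record format, vocabulary and labels here are invented just to show the shape of the work.

```python
import numpy as np

# Extract: raw records as they might arrive from a stream (invented examples).
raw_records = [
    {"text": "great service fast delivery", "label": 1},
    {"text": "slow delivery poor service",  "label": 0},
]

# Transform: turn each text into a fixed-length vector (bag-of-words counts).
vocabulary = ["great", "poor", "fast", "slow", "service", "delivery"]

def to_vector(text):
    words = text.split()
    return np.array([words.count(term) for term in vocabulary], dtype=float)

# Load: stack the vectors and labels into arrays ready for training.
X = np.stack([to_vector(r["text"]) for r in raw_records])
y = np.array([r["label"] for r in raw_records])
print(X.shape, y)   # (2, 6) [1 0]
```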

After training, the model’s inference code is tested with more data to check its performance and accuracy. In other words, you take the freshly trained AI, give it some queries and check the responses against what you’d expect it to output.
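
In code, that check amounts to little more than comparing the model’s answers against expected answers on held-out queries. The `predict` function below is a trivial stand-in for a real trained model, and the test cases are invented.

```python
# Stand-in for a freshly trained model; in practice this would be the real thing.
def predict(query: str) -> str:
    return "fraud" if "unusual" in query else "ok"

# Held-out queries paired with the answers we expect the model to produce.
test_cases = [
    ("unusual midnight purchase abroad", "fraud"),
    ("weekly grocery purchase",          "ok"),
    ("unusual series of small charges",  "fraud"),
]

correct = sum(predict(q) == expected for q, expected in test_cases)
print(f"accuracy: {correct}/{len(test_cases)} = {correct / len(test_cases):.0%}")
```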

It doesn’t stop there; the final stage is to scale up. The system described thus far is packaged into a microservice so that the AI model can be spun up thousands, if not hundreds of thousands, of times over on many servers to cope with demand. Imagine a system checking credit card transactions for fraud detection: it needs to cope with millions of purchases coming in.
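
A bare-bones sketch of such a service using Flask (assumed installed): `score_transaction` stands in for the real fraud model, and in production many replicas of this process would run behind a load balancer.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def score_transaction(amount: float) -> float:
    # Placeholder for the trained fraud model; a trivial rule for illustration.
    return 0.9 if amount > 10_000 else 0.1

@app.route("/score", methods=["POST"])
def score():
    payload = request.get_json()
    return jsonify({"fraud_probability": score_transaction(payload["amount"])})

if __name__ == "__main__":
    # In production this would run under a WSGI server, replicated many times.
    app.run(port=8080)
```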

It’s likely that small businesses like startups will hype the inference stage of their AI system and “hand-wave” the rest. They won’t be dealing with enough data to bother with all the infrastructure and scaling up, so ideally they’ll just outsource it to the cloud – but as they grow, they will need to worry about those problems.

Cloud platforms such as Google Cloud, Microsoft Azure and Amazon Web Services are all competing to offer AI as a service. They provide users with pre-trained models and a way to generate more tailored models by hooking together different tools, such as image recognition or natural language processing.

Using AI models through the cloud is restrictive and expensive. Neural nets learn iteratively and require intense training over several GPUs. Doing this through the cloud can cost anywhere between $15,000 and $30,000 per model.

Startups that can’t afford this will typically use pre-trained models downloaded from the internet, customize them, and then upload the resulting model to the cloud.

The advantage is that small teams don’t have to go around looking for computer scientists who understand machine learning to kickstart their idea. But they will be at a disadvantage, since they are constrained by the prepackaged models.
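
A hedged sketch of that customization pattern using PyTorch and torchvision (both assumed installed): download an image model pre-trained on ImageNet, freeze its layers, and replace only the final classification layer for your own categories. The number of classes is purely illustrative.

```python
import torch
import torchvision.models as models

NUM_CLASSES = 4  # e.g. the categories a small retailer cares about (illustrative)

# Download a model pre-trained on ImageNet rather than training from scratch.
# (Older torchvision versions use pretrained=True instead of the weights API.)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers ...
for param in model.parameters():
    param.requires_grad = False

# ... and replace the final classification layer with one sized for our task.
model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES)

# Only the new layer's parameters are trained, which keeps compute costs modest.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```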

AI is not for the masses

Artificial Intelligence (AI) has left the confines of science fiction to shape the contours of our lives. While Cortana, Siri, Alexa and other “intelligent assistants” help us perform all sorts of tasks, tech startups relentlessly launch other AI-driven products to tackle a growing list of human concerns — from personal finance (robo advisors) and customer service (chatbots) to romance (dating bots) and health (medical bots). Given the spreading influence of AI, Alphabet’s Eric Schmidt recently remarked that we’re inexorably heading towards the “Age of Intelligence.”

But, have we truly imbued machines with intelligence?

The answer depends on how you define AI. The term “AI” is widely abused and used to describe all levels of automation, even rule-based scripting. AI experts maintain a much higher bar, setting “artificial general intelligence (AGI)” and the most stringent variants of the Turing Test as the field’s Holy Grail. Making things murkier, there are other terms to consider: strong AI, weak AI, machine learning, deep learning – what do they all mean?

Nearly all of the AI-driven systems running today can be classified as “Weak AI” or “Narrow AI”. In his book The Singularity Is Near, computer scientist and futurist Ray Kurzweil defined weak AI as AI that is proficient in one area.

A narrowly intelligent program could become World Chess Champion or the planet’s top Go player – but still fail miserably at other tasks such as distinguishing and analyzing images. Deep Blue (which beat the reigning world chess champion in 1997), AlphaGo (which has beaten the world’s top Go players since 2016), Alexa, Siri, and the chatbot that booked your last vacation are all examples of Weak AI. So far, it’s the best we’ve come up with.

On the other hand, “Strong AI” has two alternate definitions. The term was originally coined by philosopher John Searle in his paper Minds, Brains, and Programs, which defines it as a programmed computer with a mind in exactly the same sense that human beings have minds. The term appeared alongside his famous Chinese Room argument, which posits that Strong AI cannot exist since no program can give a computer a true “mind”, regardless of how intelligent it appears. Searle probed the issue as a philosopher, but few AI researchers or computer scientists really care about the distinction between a computer with a human “mind” and a computer with a mind that behaves indistinguishably from that of a human.

More recently, Strong AI has emerged as an antonym for Narrow AI, which is driven largely by machine learning and deep learning. Machine learning (ML) refers to software applications that can learn and make predictions from data without being explicitly programmed to do so. Deep learning, a subfield of machine learning, has recently achieved breakthrough performance by using mathematical architectures loosely inspired by how neurons work in the biological brain. Most of the best-performing AI systems today are built on deep learning, also known as “deep neural network”, algorithms. Nvidia has a great post going into further detail about the differences between machine learning and deep learning. As one oft-quoted definition of the broader ambition puts it: “The creation and study of synthetic intelligences with sufficiently broad (e.g. human-level) scope and strong generalization capability, is at bottom qualitatively different from the creation and study of synthetic intelligences with significantly narrower scope and weaker generalization capability.”
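
To make the “loosely inspired by neurons” point concrete, here is a self-contained toy network in NumPy that learns XOR, a problem a single linear unit cannot solve. Every choice in it (layer sizes, learning rate, number of iterations) is illustrative rather than prescriptive.

```python
import numpy as np

rng = np.random.default_rng(42)

# XOR: the classic example a single "neuron" cannot learn but a small network can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units, one output unit.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

lr = 1.0
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error, via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0]
```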

WHAT IS GENERAL INTELLIGENCE?

The reason it has been so problematic to settle on a definition for AGI is primarily the difficulty of defining general intelligence itself. General intelligence need not even be human-like, and its potential breadth makes it hard to define and harder still to characterize through tests and metrics. Indeed, the past two decades have seen many approaches taken, none of which has taken hold as the ideal.

In a 2005 article, Nils Nilsson proposed a pragmatic approach, wherein any AI that can carry out the same practical tasks as a human can be considered to have human-level intelligence. This presupposes that human-like intelligence is the goal, which is true in many practical senses. A psychological approach to general intelligence, under analysis since the early 20th century, also relies on a human baseline, but attempts to isolate deeper underlying characteristics that enable pragmatic results. This approach is exemplified by Gardner’s theory of multiple intelligences and more recent work describing human cognitive competencies.

The adaptationist approach states that greater general intelligence is demonstrated by a greater ability to adapt to new environments, particularly with insufficient resources. This approach raises a new debate on whether the intelligence of a system lies in its ability to achieve results or in achieving them with minimal resources. Similarly, the embodiment approach holds that intelligence is best understood by focusing on the modulation of the body–environment interaction. An intelligent system operates within the rules of its environment to produce optimal behavior.

More esoteric approaches include the cognitive architecture approach, which develops requirements for human-level intelligence from the standpoint of cognitive functions such as knowledge and skill representation, reasoning and planning, and perception and action. Finally, there is the mathematical approach, which attempts to define intelligence by the reward-achieving capability of a system. In this highly generalized approach, humans are not taken as a benchmark and are indeed far from maximally intelligent.
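
One well-known formalization in this spirit – offered here only as an illustration of the approach, not something cited above – is Legg and Hutter’s universal intelligence measure, which scores an agent $\pi$ by its expected reward across all computable environments, weighted so that simpler environments count more:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}$$

where $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected cumulative reward the agent earns in $\mu$. On a measure like this, humans are indeed nowhere near the ceiling.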

HOW CAN WE TEST FOR AGI?

Given the difficulty in achieving a universal definition for AGI, it’s no surprise that developing a single test or metric for its presence is equally controversial. Further complicating matters is the importance of external environment in analyzing an AI’s behavior.

Numerous tests have been proposed, beginning with the Turing Test put forth by Alan Turing in 1950. In this test, a machine passes if it is able to successfully imitate human conversation and fool an evaluator. A similar version, the Virtual Turing Test, plays out the same scenario through avatars in a virtual world. A Text Compression Test challenges an AI to compress a text by recognizing and understanding the patterns contained within it.

There are also various tests challenging an AI or robot to accomplish human educational goals – graduate from an online university, graduate from a physical university, or win a Nobel Prize. Some of these clearly overshoot human-level intelligence as few humans are Nobel Prize winners. Practical tests have been put forth by those advocating for a pragmatic approach to AGI. The Coffee Test, proposed by Apple co-founder Steve Wozniak, asks whether an AI can enter an average American home and make a cup of coffee. Similarly, the Employment Test challenges an AI to hold down a human job.

While settling on a test for AGI is difficult, establishing a metric for partial progress toward AGI is exponentially more so. Practical tests have been proposed, such as putting an AI through elementary school or using the Coffee Test, but these can be easily gamed by systems designed with the test in mind. Some researchers suggest that it is fundamentally impossible to quantify progress toward AGI due to the principle of “cognitive synergy”: a fully functional AI may achieve 100% on a test while a 90%-functional AI scores only 50% on the same test.

All in all, achieving AGI is an extraordinary undertaking and a far cry from the “AI” currently touted by start-ups and marketers. Developing a true artificial intelligence, and establishing it as such, is an ongoing challenge that continues to be hindered by difficulties in devising definitions, metrics and tests. However, its inevitable arrival promises to herald a new era in human-machine interactions and perhaps force a redefinition of “intelligence” itself.

We need to admit that interest is soaring. There’s a lot of hype, and the reality is that the technology is still very raw and difficult to implement in production. Making the jump from prototype to product introduces new challenges:

  • Where does the training data come from?
  • How is it stored, organized, sanitized, and prepared for teaching a system?
  • How do people query the system?
  • Who can query it?
  • What about security: how is sensitive information managed and protected?
  • How fast does my hardware need to be to deliver results?
  • Where are the performance bottlenecks and concurrency hurdles?

AI may be getting there, but it is not yet what we imagined in the 80s, nor the self-aware AI imagined more recently. There is still a lot to become of AI and machine learning. I still have not understood why people hype these concepts and approaches as the general way forward. For bigger organisations with data to crunch it may make sense, but a great deal of work still needs to be done. Another aspect is that it pulls auditors into a real dilemma over data analytics with AI and machine learning. Come on, people: well over 90 percent of the industry has yet to reach this kind of thinking, and roughly the same proportion still has process issues that need to be corrected and redesigned first. Meanwhile the pundits keep floating ideas, leaving people more and more confused about where to start and where to stop.
