Artificial Intelligence and the Intelligent Investor

Artificial Intelligence (which we will shorthand as AI) is all the rage in the venture capital industry. For the past decade, our industry’s mantra was “Software Is Eating the World.” The new mantra is “Artificial Intelligence Is Consuming the World.”

Our mission here is to make sense of all this for investors. Just what is AI (or, for that matter, intelligence itself)? Where is it now? Where is it heading? What does all this mean for investors?

What is Intelligence?  How is it demonstrated?

Christopher Evans, in his illuminating book, The Micro Millennium (published in 1979 by the Viking Press), defined intelligence as “the ability of a system to adjust appropriately to a changing world.” Our world is in constant change. “The more capable a system is of adjusting — the more versatile its adjusting power — the more intelligent it is.”

He asserted that there are six factors that enable and determine the extent of a system’s ability to adjust, and thus its degree of intelligence. These factors are:

  1. Sensation — the ability to capture data
  2. Data storage — how much information can be stored, how fast, and for how long
  3. Processing speed — how fast the information can be handled and moved into use
  4. Software modification capability — the speed and ease with which programming can be altered and/or new programming produced, without external intervention, when conditions require
  5. Software efficiency — the ability to run “the program,” error-free or nearly so, using as little of available processing power as possible
  6. Output versatility — the range of tasks that the software enables to be performed

A seventh factor in intelligence, identified since Evans first wrote, is the ability of a system to duplicate its own software and hardware.  This is essentially what human DNA accomplishes.   DNA duplication — the human software — results in development of new cells and eventually a new human being — the hardware.

So what is Artificial Intelligence?

There is no one definition that everyone agrees on, but there is enough commonality among definitions to give investors a working understanding.

Merriam-Webster defines AI in two ways:

  1. a branch of computer science dealing with the simulation of intelligent behavior in computers
  2. the capability of a machine to imitate intelligent human behavior

Wikipedia further fleshes out a practical definition. “Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. . . Colloquially, the term ‘artificial intelligence’ is applied when a machine mimics ‘cognitive’ functions that humans associate with other human minds, such as ‘learning’ and ‘problem solving’.”

As computer capabilities that once counted as AI become routine, such as optical character recognition, we tend to stop regarding them as AI at all. Wikipedia calls that the “AI effect,” leading to the quip, “AI is whatever hasn’t been done yet.”

Technological foundation supporting AI

AI began as an academic discipline 62 years ago, in 1956. Its scope was limited in its early days by the limited capacity of computers themselves.

The principal focus was on creating expert systems.  An expert system required inputting sufficient data so that a computer could provide an answer equivalent to what an expert in a particular field would provide, such as in medical diagnosis.  The reasoning was that, if a computer had sufficient expert data fed to it and also expert approaches and rule-based guidance, the computer would produce the same output as a human expert.

This worked partially in some fields but ultimately didn’t get much traction: before the advent of “big data,” there was simply not enough data available, nor the ability to store, retrieve, and process it fast enough. The computer also had no real ability to learn continuously. Over the past forty years, AI research has focused on solving these problems, particularly the problem of continuous learning. The other constraints have been resolved as computers have evolved over time.

Readers “of a certain age” can remember when a simple computer required a huge amount of space, powerful air conditioning, and patient programmers juggling stacks of computer cards. Now we can hold a powerful computer right in our hand and store its mini-version, the smartphone, in our pocket.

Over the past sixty years, computer processing power has increased over one trillion times.  Computers are storing ever greater quantities of and increasingly more complex information in the form of structured and non-structured data and processing it at faster and faster speeds. The smartphones we all carry around have more processing power than the computer that guided our nation’s first manned moon launch.

Quantifying all this intelligence

Using his initial six-factor definition of intelligence, Evans set up a scale to rate various entities on intelligence. This scale is useful in demonstrating the gap between human intelligence and computer intelligence, and in understanding how fast that gap is closing. When Evans’ The Micro Millennium was published in 1979, if humans’ intelligence were rated at one million on his scale, computers would have been at about 1,000, beating humans only in processing speed. In other words, on his scale, in 1979 humans were considered one thousand times more intelligent than computers. Computers then were big, fast . . . and dumb.

That vast difference should not have given humans cause for much comfort. Evans noted that it took humans 200,000 years to reach our level of intelligence. The computer was first conceived less than 200 years ago.

Since 1979, in less than 40 years, the computer has made huge strides. These include that trillion-fold improvement in processing power, vastly increased storage capacity, size and cost improvements that make data access almost free, and complex programming that is beginning to approach human intelligence.

Not only is the gap with human intelligence narrowing, but the rate at which the gap is narrowing is accelerating.  Many knowledgeable analysts and futurists believe that within several decades a computer will seem at least as intelligent as a human.  Even if that prediction is too ambitious, most believe that within a century our computers’ intelligence will have surpassed our own human intelligence.

According to Evans, this will be aided by some important advantages computers have over humans. Humans have had to develop, possess, and “run” their own huge “software” packages to reproduce, eat, see, hear, maintain bodily systems, and so on. An intelligent computer does not have these ongoing requirements. Its developers, and the computer itself, can concentrate largely on the software required for intelligence. For a human, this would be similar to having just a brain without a body.

Computers now listen to us, talk with us, and assist us in making our lives more pleasant and our decisions more informed. They have become virtual personal assistants, and they will seem more and more human as the emerging field of artificial intelligence advances. Soon computers may pass the Turing test: someone who doesn’t know whether a computer or a human is “speaking” may be unable to tell the difference.

Start-up companies are now developing computer software that continuously learns from the data fed to it. Computers can thereby draw increasingly informed conclusions and inferences. Computers using advanced AI predictive analytics can forecast likely outcomes with a high degree of probability.
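
As a rough illustration of such continuous learning, here is a minimal sketch (in Python) using scikit-learn’s SGDClassifier, whose partial_fit method updates a model incrementally as each new batch of data arrives, rather than retraining from scratch. The data and its hidden pattern are synthetic, invented purely for illustration.

    # Minimal sketch of continuous (incremental) learning: the model is
    # updated as each new batch arrives instead of being retrained from scratch.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    model = SGDClassifier(loss="log_loss")   # logistic regression, trained incrementally
    classes = np.array([0, 1])

    for day in range(30):                    # pretend each pass is a new day of data
        X = rng.normal(size=(500, 4))        # 500 new observations, 4 features each
        y = (X[:, 0] + X[:, 1] > 0).astype(int)  # the hidden pattern to be learned
        model.partial_fit(X, y, classes=classes)

    # The model can now score fresh data with estimated probabilities.
    X_new = rng.normal(size=(3, 4))
    print(model.predict_proba(X_new))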

Further progress toward more highly functioning, or intelligent, computers will be driven by increasingly specialized computer chips (or whatever hardware comes next) along with related software. Chips and related electronic devices are already built at nanometer scale, and they may eventually operate at atomic, or perhaps even quark, scale. Cost will fall ever closer to free. AI-based software will become so intelligent that almost anything that runs it will in turn be made more intelligent. This could range from driverless cars, to self-piloting planes, to instant, accurate medical diagnostic and procedural devices.

How are human and artificial intelligence different? How are they similar?

The human brain likely does not work much like a computer. Advanced AI programming, on the other hand, uses the human brain as a model, creating computer-generated “neural networks” that resemble the architecture of the human brain. These AI-enabled computers send electronic signals through a cascading, ascending network of connections that learns and thereby provides increasing understanding.

A computer with brain-like function is approaching the horizon, likely to emerge within the next decade or two.  Through increased artificial intelligence, computers will appear to “think” like we do, and will likely even out-think us in many areas. They will evolve from today’s powerful but conceptually straightforward big data crunching to more complex and connected information processing resembling human intelligence.

Computer software continues to evolve toward greater complexity and connectivity. This will enable the hardware to perform more and increasingly complex tasks more efficiently and effectively.  Essentially the software and the hardware will operate increasingly in a feedback loop, advancing on their own, through continuous learning, in an ongoing chain of being and becoming.

As computers and AI software further develop, the possibilities are exciting. The coming quantum computer will not only provide vastly improved processing; through artificial intelligence it should also be able to quickly improve its own software and hardware and even build its own offspring through 3D and 4D printing and manufacturing.

We humans demonstrate our intelligence mostly to other humans.  We often think of ourselves based on others’ reactions and reflections.  If others say that we are intelligent, then we are likely to feel that we are intelligent. If the feedback is not so positive, then we may reach the opposite conclusion.  If we are successful economically, artistically, culturally, or in some other way valued by society, then we are likely to conclude that we are intelligent even if others perhaps do not agree.  Using Evans’ factors, we will likely conclude that we are intelligent if we successfully adapt to an ever changing world and are good at surviving and prospering.

Computers, on the other hand, have until now been static machines dependent on humans for their continued useful existence. They have not had to adapt to a changing world. Their adaptation has depended upon human data input, programming, and perhaps the incorporation of additional peripheral hardware.

In the emerging era of AI, computers imbued with Artificial Intelligence will be expected to learn more and more, faster and faster, to justify their continued use. Otherwise they will be replaced with the next generation machine.

A major mission for AI will be to teach (program) computers to learn even more than their data input and programming alone might be expected to permit. Like humans, computers will learn through experience (processing and feedback loops), study (ever-increasing data input), and being taught (continuous learning). This has already begun to happen.

Yet artificial intelligence is still quite different from human intelligence. Ron Brachman, former president of the Association for the Advancement of Artificial Intelligence, describes human intelligence as “a many splendored thing.” It incorporates versatility, robustness, common sense, flexibility, adaptation, and continuous learning from the world around it.

AI development, on the other hand, has splintered into a number of sub-sections and sub-specialties. This has limited progress toward a fully functioning artificial intelligence that unites the various components into a unified whole.

Brachman argues that, without such fully functioning unified intelligence, the various sub-systems are less able to adapt to change, limiting their potential intelligence. Sub-systems can perform interesting and useful feats, but their range is limited by that limited adaptability.

The current state of AI

Even as the world waits for computers whose specialized sub-systems are unified into a coherent whole, interesting and useful AI products and services built on individual sub-systems are already having a major economic and societal impact. These early successes generally rest on large collections of data (big data) and on finding patterns in that data that were not previously apparent, transforming personal and business life.

We have come to take some of these applications for granted and don’t think about them consciously as artificial intelligence.

  • When we begin to type a person’s name or email address into an email, the system recognizes those with whom we have had frequent prior correspondence and completes the address for us.
  • When we type texts on our smartphones, the phones correct our typos and anticipate what we (may or may not have) meant.

More substantive examples include:

  • When the credit card company detects transactions inconsistent with our prior behavior, its system flags those off-pattern transactions and may reject them or alert the cardholder to possible fraud (see the sketch after this list).
  • When a retailer like Walmart forecasts order quantities or plans for optimal use of refrigerator and freezer space, it relies on cumulative past experience to optimize its decisions.
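
To make the fraud example concrete, here is a minimal sketch of off-pattern flagging using a simple statistical rule: score each new charge against the cardholder’s own history and flag outliers. The transaction amounts and the z-score threshold are invented for illustration; production systems use far richer features (merchant, location, time of day) and learned models.

    # Flag a charge as off-pattern if it sits far from the cardholder's
    # typical behavior, measured in standard deviations (a z-score).
    import statistics

    history = [23.50, 41.00, 18.75, 52.30, 37.10, 29.99, 44.20, 31.50]

    def is_off_pattern(amount, history, z_cutoff=3.0):
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        z = abs(amount - mean) / stdev       # distance from typical behavior
        return z > z_cutoff

    print(is_off_pattern(35.00, history))    # False: consistent with history
    print(is_off_pattern(2400.00, history))  # True: reject or alert the cardholder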

Beyond these common uses of artificial intelligence, there are many that are more exciting:

  • Drug discovery by biotech and big pharma
  • Insurance companies providing a more personalized customer experience
  • Platform companies processing billions of pieces of content daily
  • Retailers using virtual vision scanning and AI to identify, for fashion-motivated consumers, clothing similar to what they or others are wearing, opening access to a range of preferred choices
  • Applying AI to visual arts tasks, such as capturing the style of a given painting and applying it aesthetically to a photograph
  • Improved language translation
  • Predicting the value of marketing initiatives by ranking decision factors based on an analysis of prior “cluster” patterns
  • Predicting gene functioning relationships or health complications from health records data
  • Attorneys’ analysis of bodies of contracts, securities, and other documents, as well as more accurate reading of fact patterns and related case-law abstracts

AI applications seemingly are becoming as numerous as the tasks at hand, increasingly applied in specialized areas and endeavors.

Most of today’s AI applications program computers to learn from experience, somewhat as humans do. Machine learning is applied to very large data sets. Machine-learning algorithms detect patterns; new data is then processed against those patterns to make predictions and recommendations. As input data and experience accumulate, the algorithms adapt, becoming smarter and more effective over time.
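
A minimal sketch of that cycle, assuming a synthetic data set and an off-the-shelf scikit-learn model: detect patterns in accumulated data, use them to predict on new data, then fold the new experience back in and refit.

    # (1) learn patterns, (2) predict on new data, (3) refit as experience grows.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 5))
    y = (X[:, 0] > X[:, 1]).astype(int)          # the pattern hidden in the data

    model = RandomForestClassifier(n_estimators=50).fit(X, y)   # step 1

    X_new = rng.normal(size=(5, 5))
    print(model.predict(X_new))                                 # step 2

    # Step 3: once outcomes of the new cases become known, append and refit,
    # so the model keeps adapting as input data and experience accumulate.
    y_new = (X_new[:, 0] > X_new[:, 1]).astype(int)
    X, y = np.vstack([X, X_new]), np.concatenate([y, y_new])
    model = RandomForestClassifier(n_estimators=50).fit(X, y)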

Artificial General Intelligence (A.G.I.) versus Artificial Narrow Intelligence (A.N.I.)

Discussion about AI often leads to creative speculation about how far the technology may take us. Minds wander to what is called Artificial General Intelligence (A.G.I.), the sort of intelligence that could surpass human intelligence and threaten (or benefit, in unimaginably grand ways) humankind and life as we know it.

A recent article by Tad Friend in The New Yorker, “How Frightened Should We Be of A.I.?” discussed this at length. According to Friend, “Bill Gates and Tim Berners-Lee, the founder of the World Wide Web, recognize the promise of an A.G.I., a wish-granting genie rubbed up from our dreams, yet each has voiced grave concerns. Elon Musk warns against ‘summoning the demon,’ envisaging ‘an immortal dictator from which we can never escape.’ Stephen Hawking declared that an A.G.I. ‘could spell the end of the human race’.”

Such alarmism isn’t new; it goes back to the birth of AI in the 1950s. We urge you not to lose sleep over these concerns. Of course, if they ever come to pass, there may not be much we can do about them!

Where AI is evolving

Numerous more advanced applications are in various stages of development and/or commercialization. Many have been inspired by the rich complexity of the human brain as a model.

It is thought that human brains have billions of neurons and trillions of connections between those neurons, carrying electrical and chemical signals. Those signals, characterized by varied strengths, trigger outputs such as physical actions, thoughts, or perhaps even consciousness. The brain is thought to work as a unified “field” in which the whole brain is much more than the sum of its parts.

Taking the human brain as a possible model, computer scientists have developed a computer learning approach called “deep learning.” In “An Executive’s Guide to AI,” consulting leader McKinsey describes deep learning as a type of machine learning that can process a wider range of data sources, requires less data preprocessing by humans, and can often produce more accurate results than traditional machine-learning approaches, although it requires a larger amount of data to do all this.

In deep learning, interconnected layers of “processing units” called “neurons” form a neural network. The multi-layer neural network has much greater data input and processing capacity. Each successive layer uses as input the output from the previous layer’s processing units. Increasingly complex (or deep) learning is thereby achieved at each successive layer.

According to Wikipedia, those layers learn in supervised (e.g., classification-aided) and/or unsupervised (e.g., pattern analysis) ways. The multiple levels of output representations correspond to different levels of abstraction, forming a hierarchy of concepts. The more layers in the hierarchy, and the greater the ability to propagate through a layer more than once, the better the learning.
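
For readers who want to see the layering concretely, here is a toy forward pass through such a network. The weights are random, so this shows structure only: each layer’s output becomes the next layer’s input, building a more abstract representation at each step. In real deep learning, the weights are learned from data via backpropagation rather than drawn at random.

    # Toy forward pass: each layer transforms its input and feeds the next.
    import numpy as np

    rng = np.random.default_rng(2)

    def layer(x, n_out):
        """One fully connected layer: weighted sum, then a nonlinearity (ReLU)."""
        W = rng.normal(size=(x.shape[0], n_out))   # random weights, for shape only
        return np.maximum(0, W.T @ x)

    x = rng.normal(size=8)       # raw input (e.g., pixel features)
    h1 = layer(x, 16)            # first layer: low-level patterns
    h2 = layer(h1, 16)           # second layer: combinations of those patterns
    out = layer(h2, 2)           # output layer: e.g., scores for two classes
    print(out)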

Our Perspective as Venture Capitalists and Investors

While it has been useful in the development of AI to use the human brain as a model, we don’t really understand much about how the brain actually works. AI in a computer may prove to be a process completely different from how a human brain functions.

Even if we eventually understand the human brain more completely, that may not matter much for artificial intelligence. As long as an AI robot or computer can perform a specific task and continue to learn to perform that task and/or other tasks more effectively, that’s what will really matter.

We recommend leaving concerns with A.G.I. (presumably the closest to the human brain model) to the biggest IT leaders, such as Microsoft, Alphabet, Amazon, and Apple. A Wall Street Journal report recently pegged Apple’s annual R&D spending at $14 billion, and that spend was the lowest among the named juggernauts. Startup ventures won’t be able to compete with their massive R&D investment capabilities.

We believe the greatest opportunities for venture capitalists like us, and for individual investors, will be in A.N.I., which is focused on more specific applications. The biggest IT leaders can’t chase all of those opportunities or scramble from task area to task area trying to preempt these narrower developments. It will make more sense for them to acquire, license, or partner with such task-specific ventures.

AI need not possess human-like consciousness to be effective. Not needing consciousness should lop decades off the development of commercially attractive artificial intelligence applications.

New applications just now reaching commercial reality 

The absence of human-like consciousness and a focus on narrower applications does not mean that those specific tasks will be simple or simplistic. Deep learning will open the door to increasingly complex neural networks and sophisticated applications.

An example is a Chicago-based company called simMachines, whose artificial intelligence focuses on similarities and clusters of like data to make more accurate predictions. Its product not only produces predictions that are more accurate, and delivered faster, than those of AI systems already in place, but also provides transparency into the factors driving the predictions. It essentially answers, for the first time, the question “why.”
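
simMachines’ methods are proprietary, but the general idea of similarity-based prediction with a built-in “why” can be sketched with a plain nearest-neighbor approach: classify a new case by its most similar past cases, then report which cases, and how similar they were, drove the call. Everything below is an invented, generic illustration, not the company’s algorithm.

    # Predict from the most similar past cases and return them as the "why."
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 3))                # past cases, 3 features each
    y = (X[:, 0] + X[:, 2] > 0).astype(int)      # their known outcomes

    def predict_with_why(x_new, k=5):
        dists = np.linalg.norm(X - x_new, axis=1)    # similarity = closeness
        nearest = np.argsort(dists)[:k]
        prediction = int(round(y[nearest].mean()))   # majority vote of neighbors
        why = [(int(i), round(float(dists[i]), 2), int(y[i])) for i in nearest]
        return prediction, why                       # the neighbors are the explanation

    pred, why = predict_with_why(np.array([0.5, -0.1, 0.8]))
    print("prediction:", pred)
    print("driven by cases (index, distance, outcome):", why)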

This company’s investor attractiveness is enhanced by its strategic focus on specific industry applications. Its two initial areas of focus are credit card fraud detection/prevention and marketing effectiveness optimization.

In the credit card fraud prevention market, simMachines’ product has proven to be much faster than current products. This allows companies to make more informed, faster decisions regarding the creditworthiness and legitimacy of potential customers.

In the field of marketing effectiveness, the company goes beyond existing statistical modeling techniques that rely on large amounts of static data. simMachines’ continuous learning enables dynamic, and hence increasingly accurate, market segmentation, generating marketing outcome predictions in seconds and resulting in more precise campaigns at lower cost.

Other companies are focused on applying advanced set theory mathematics, along with complex decision rules, to create continuous learning. The set theory approach enables understanding words (textual data) in their context with other words. Said another way, the technology enables a computer to understand the written word the way a human does.

This is needed for reliable understanding and intelligence, as the English language includes about 56,000 words with multiple meanings. Unless a word’s meaning can be understood in its usage context, all that will exist is a Tower of Babel. This approach requires that sufficiently massive data be input for the computer to continue learning to process textual data in context.
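
A toy illustration of the set-based idea, not any vendor’s proprietary method: give each candidate meaning of a word a set of signature context words, then pick the meaning whose set overlaps the sentence most (a simplified version of the classic Lesk approach). The signature sets here are invented.

    # Resolve a word's meaning by set overlap between candidate-sense
    # signatures and the words actually surrounding it.
    senses = {
        "bank/finance": {"money", "deposit", "loan", "account", "interest"},
        "bank/river":   {"river", "water", "shore", "fishing", "mud"},
    }

    def disambiguate(word_senses, sentence):
        context = set(sentence.lower().split())
        return max(word_senses, key=lambda s: len(word_senses[s] & context))

    print(disambiguate(senses, "She opened a savings account at the bank"))
    # -> bank/finance
    print(disambiguate(senses, "They went fishing along the bank of the river"))
    # -> bank/river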

While the biggest IT leaders are pursuing R&D in this area, we believe the way for venture capital investment to succeed here is through focus on specific industry applications, where industry knowledge (and relevant marketing and sales capabilities) could be key. Particularly ripe areas include strategic government intelligence and business intelligence, where massive amounts of textual data must be scoured for valuable insight and action.

Coming soon – advanced robotics

While rudimentary robotics is already a commercial reality, more advanced robotics is an AI application field with particularly great promise. Robotics is an amalgam of mechanical engineering, electrical engineering, and computer science. The field encompasses the design, construction, and application of robots and the associated computer systems that control and direct their operations.

The idea of robots has been with us for a long time. Early applications, however, have been limited and haven’t contributed much so far to social development beyond robotic vacuum cleaners, machines handling narrow tasks in automotive manufacture, and other similarly simple-task devices.

Incorporation into robotics of advanced vision systems, machine learning (accelerated by virtual reality simulating the real world for the robots), and big data/predictive analytics will soon result in complex artificial intelligence that will have a dramatic impact on robotics potential.

A brave new world of highly functioning robots may become reality in the near future, potentially eliminating the need for much of today’s hard human labor as well as soft labor in repetitive-motion tasks. These robotic advances may free humanity from exhausting and/or boring labor, contributing ultimately to a world of greater abundance.

Looking further into the future, micro-robots constructed of nanomaterials might be injected into the human body for surgical and other medical purposes.  Robots may eventually construct and operate the factories of the future with minimal human intervention. 3D and 4D printing incorporated into robotic printers may someday produce many of the objects needed for the good life.

All this may become possible because the multiple elements of truly advanced robotics technology are just now converging. For example, advanced robotics requires advanced vision systems: it is nearly impossible to do work without manipulating the tools of that work, and manipulation is far less robust without the ability to see. Machine vision companies are now using AI techniques, along with advanced materials and methods and knowledge about how humans see, to create artificial vision systems for robots. While currently crude, much more sophisticated systems are expected soon, as more is understood about what it means to “see.” This will enable robots to perceive the physical environment accurately and to navigate it.

Human vision, as we know it, requires immense processing power and memory recall as well as the ability to make an accurate guess about what is perceived when the image is ambiguous.  Accurate machine vision systems will require all of this and more. This will be enabled by AI, including the capacity to continuously learn from experience.

Human vision itself is also being augmented by AI-enabled virtual reality that generates realistic images, sounds, and other sensations replicating a real environment (or creating an imaginary setting) and simulating a user’s physical presence in this environment. Robotic engineers will soon be able to use AI-enhanced virtual reality to simulate for a robot a virtual world that the robot will see as the real world.  The robot will then capitalize on continuous learning, applying lessons learned in the virtual world to the real world, creating a continuous feedback loop for ongoing effectiveness improvement.
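
A toy sketch of that simulate-then-transfer loop, with every detail invented for illustration: a “robot” practices a task (pushing an object a target distance) in a cheap simulated world, then keeps learning from feedback when the real world turns out to behave somewhat differently.

    # Practice in simulation, then keep adapting from real-world feedback.
    import random

    random.seed(4)

    def world(force, friction):
        """Distance an object slides for a given push, in some world."""
        return force / friction + random.gauss(0, 0.05)

    def learn(friction, trials, force=1.0, target=2.0, lr=0.2):
        """Feedback loop: nudge the force until the target distance is reached."""
        for _ in range(trials):
            error = target - world(force, friction)
            force += lr * error              # continuous learning from feedback
        return force

    force = learn(friction=1.0, trials=200)              # practice in simulation
    force = learn(friction=1.3, trials=50, force=force)  # adapt to the real world
    print(round(force, 2))                               # ~2.6 (= target x real friction)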

Future AI impact on human biology

AI may have a major impact not only on robotic evolution but on further human evolution as well.  Just as we regard the machine that sits on our desk as a computer, so we can regard the human genome as a computer, subject to many of the same advances when AI is applied to human biology.

The human genome, the complete set of genes housed in 23 pairs of chromosomes, is both the autobiography and the future of our species. Since DNA is essentially a biological computer code, AI may soon move into the human domain just as it has in the computer and robotics domains.

As our understanding of the genome, and of the ordering and functioning of its genetic code, continues to grow, AI research will focus on directing, controlling, and increasing the power of our biological computing systems. This will include applying predictive analytics to find the drugs most likely to deliver personalized benefits, and may ultimately “teach” our DNA how to modify itself in beneficial ways.

Looking even further out

Soon computers will capitalize even further on continuous learning, as data relevant to any problem or task pursued by AI is input directly from the Internet and output is delivered on demand as if through a fire hose. This may lead to the so-called “Singularity,” when computers equal humans in apparent intelligence. The path to this Singularity is being laid now by worldwide investment in AI R&D by governments, universities, and corporate organizations. Experts believe it will likely arrive by 2050.

Stanford University’s inaugural AI index offered these observations and projections:

  • There has been a 14X increase in the number of active AI startups since 2000.
  • Venture capital investment in AI startups has increased 6X since 2000.
  • The share of jobs requiring AI skills has grown 4-5X since 2013.
  • Global sales of robots have risen from approximately 100,000 in 2000 to around 250,000 in 2015.
  • Global revenues from AI enterprise applications are projected to grow from $1.6B in 2018 to $32B in 2025, a 53% compound annual growth rate.
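
That last growth rate checks out arithmetically: growing from $1.6B to $32B over the seven years from 2018 to 2025 implies roughly a 53% compound annual growth rate.

    # CAGR = (end / start) ** (1 / years) - 1
    cagr = (32 / 1.6) ** (1 / 7) - 1
    print(f"{cagr:.1%}")   # -> 53.4%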

The elements expected to enable so-called Singularity by 2050 are: (1) faster computer processing speed; (2) much greater data storage; (3) computer ability for continuous learning; and (4) a connection to all available data/information and ability to search for and select data relevant to the task at hand.

Faster computer processing speeds are likely through the development of quantum computers, which are expected eventually to be able to process all the world’s data almost instantaneously. An initial fully functioning quantum computer is expected within about a decade.

There are five or so major recognized approaches to developing a quantum computer, and work is proceeding rapidly at companies big and small. The number of qubits (quantum processing cells) that can work together is now approaching take-off.

Artificial DNA is a candidate for storing vast amounts of data at much lower cost. DNA is already being utilized to store and retrieve information. The cost of DNA storage is the main hurdle for implementation. This hurdle is expected to be surmounted within a decade, so that all the world’s data could eventually be stored in a space about the size of a case of wine.

Computer software is already under development that will enable computers to continuously learn based on deep mathematical formulations that will permit computer understanding of inference and context.  It is anticipated that such continuously learning computers will arrive within the next five years.

The next generation Internet — the fire hose carrying all information and data to all intelligent computers — is under development.

Singularity is expected to mean that a computer will in many ways appear to have human capabilities. As discussed earlier, while some experts have sounded an alarm about what all this might mean, we are hopeful that the result will be a better world with better lives for everyone.

What all this means for the Intelligent Investor

The developments in AI and related fields that we have discussed will provide numerous investment opportunities for the Intelligent Investor over the next decade and beyond. The Stanford AI Index reports that 84% of enterprises believe investing in AI will lead to greater competitive advantage, 75% believe AI will open up new businesses and provide new ways to gain access to markets, and 63% believe the pressure to reduce costs will require the use of AI (Source: Statista).

Leading corporations such as Alphabet, Amazon, Apple, Facebook and others are making major investments in AI platforms likely to dominate overall AI technology. We therefore believe the Intelligent Investor will be best served by investing in selective, targeted AI-enabled products and services focused on well-defined market segments.

The previously cited simMachines venture is a good example, focused on credit card fraud prevention and marketing plan/customer segmentation optimization. We expect that AI-based ventures that secure a beachhead in a targeted industry segment will be attractive acquisition, licensing, or partnering targets for larger companies. Such AI access may be essential for existing enterprises’ survival.

Some concluding introspection

As with other world-changing developments — e.g., the birth of the automobile, mass production, the rise of the Internet — AI development will eliminate many existing industries, companies, and jobs while creating new opportunities. It will likely have a profound impact on the venture capital industry and venture capital investors as well.

Venture capitalists perform several tasks:

  • They hunt for new high risk/high reward investment opportunities.
  • They search for information to help determine if those investment opportunities are likely to be successful, often researching competitive technologies as well as looking at similar companies that are likely to compete.
  • They further assess potential returns through exit models based on comparable ventures as well as considering background on management and other relevant information.

Several venture capital firms are already utilizing algorithms that crunch large amounts of data in order to make investment selections based on AI-generated predictions. This approach is currently best suited to situations where more information is available, generally later-stage investments.

It is likely, though, that increasingly sophisticated AI applications will be developed which can successfully predict success for early stage investments too. This will likely entail more methodical analysis of products, business models, and founder and management team quality, all aided by efficient processing of massive amounts of data.

A characteristic that some believe is key to successful venture capital investing is “looking as far as the eye can see, and knowing all the wonder that will be.” Somewhere in the future, our AI-enabled computers, while perhaps not wondering like humans do, will undoubtedly look further and see more accurately.

*VCapital is currently raising funds for simMachines, a company developing proprietary artificial intelligence (AI) /machine learning (ML) solutions for marketing and fraud in large enterprises.