Backpropagation (BP) is the most popular supervised neural network learning algorithm. A network trained with BP is organized into layers of neurons and the connections between those layers. The leftmost layer is called the input layer. The rightmost, or output, layer contains the output neurons. Finally, the middle layers are called hidden layers. While the design of the input and output layers of a neural network is straightforward, there is an art to the design of the hidden layers. Designing and training a neural network requires choosing the number and types of nodes and layers, the learning rate, the training data, and the test sets.
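As an illustration, the layered structure and the backward propagation of error described above can be sketched in a few lines of NumPy. The XOR task, layer sizes, and learning rate below are arbitrary choices for the sketch, not prescriptions:

```python
import numpy as np

# Toy network: 2 inputs -> 3 hidden neurons -> 1 output, sigmoid activations.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input layer
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

lr = 1.0
losses = []
for _ in range(2000):
    # Forward pass: input layer -> hidden layer -> output layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: propagate the error from the output layer
    # back through the hidden layer to update every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(losses[0], losses[-1])  # training error shrinks over the epochs
```

The "art" of the hidden layers is visible even here: with too few hidden neurons the network cannot represent XOR at all, and the learning rate must be tuned by hand.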
Recently deep learning, a new term that describes a set of algorithms that use a neural network as an underlying architecture, has generated many headlines. Deep neural networks learn hierarchical layers of representation from the input to perform pattern recognition. When the problem exhibits non-linear properties, deep networks are computationally more attractive than classical neural networks. A deep network can be viewed as a program in which the functions computed by the lower-layered neurons are subroutines. These subroutines are reused many times in the computation of the final program.
Although deep learning garners much attention, it has inherent restrictions that limit its application and effectiveness in many industries and fields: it requires substantial human expertise and significant time to design and train.
Data mining, or knowledge discovery in databases, is the nontrivial extraction of implicit, previously unknown and potentially useful information from data. Statistical methods are used that enable trends and other relationships to be identified in large databases.
Data mining has attracted attention because of the wide availability of vast amounts of data and the need to turn such data into useful information and knowledge. The knowledge gained can be used for applications ranging from risk monitoring, business management, and production control to market analysis, engineering, and science exploration.
Case-based reasoning
Case-based reasoning (CBR) is a problem-solving paradigm that is different from other major A.I. approaches. CBR learns from past experiences to solve new problems. Rather than relying on a domain expert to write the rules or make associations along generalized relationships between problem descriptors and conclusions, a CBR system learns from previous experience in the same way a physician learns from patients. A CBR system will create generic cases based on the diagnosis and treatment of previous patients to determine the disease and treatment for a new patient. The implementation of a CBR system consists of identifying relevant case features. A CBR system continually learns from each new situation. Generalized cases can provide explanations that are richer than explanations generated by chains of rules. The most important limitations relate to how cases are efficiently represented, how indexes are created, and how individual cases are generalized.
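The retrieve-reuse-retain cycle described above can be sketched with a toy case base. The medical features and diagnoses here are hypothetical, and the similarity measure is deliberately simplistic:

```python
# Hypothetical case base: each case pairs problem features with a known outcome.
case_base = [
    ({"fever": 1, "cough": 1, "rash": 0}, "flu"),
    ({"fever": 0, "cough": 0, "rash": 1}, "allergy"),
    ({"fever": 1, "cough": 0, "rash": 1}, "measles"),
]

def similarity(a, b):
    # Count matching feature values -- a stand-in for real case indexing.
    return sum(a[k] == b[k] for k in a)

def solve(new_problem):
    # Retrieve the most similar past case and reuse its solution.
    best_features, best_solution = max(
        case_base, key=lambda case: similarity(new_problem, case[0])
    )
    # Retain: store the new experience so the system keeps learning.
    case_base.append((new_problem, best_solution))
    return best_solution

print(solve({"fever": 1, "cough": 1, "rash": 1}))
```

The limitations noted above show up directly in this sketch: a flat list scanned linearly stands in for efficient case representation and indexing, and real systems must also adapt and generalize the retrieved solution rather than copy it verbatim.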
Genetic algorithms work by simulating the logic of Darwinian selection where only the best performers are selected for reproduction. Over many generations, natural populations evolve according to the principles of natural selection. A genetic algorithm can be thought of as a population of individuals represented by chromosomes. In computing terms, a genetic algorithm implements the model of computation by having arrays of bits or characters (binary string) to represent the chromosomes. Each string represents a potential solution. The genetic algorithm then manipulates the most promising chromosomes searching for improved solutions. A genetic algorithm operates through a cycle of three stages:
- Build and maintain a population of solutions to a problem
- Choose the better solutions for recombination with each other
- Use their offspring to replace poorer solutions.
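The three-stage cycle above can be sketched on a toy "OneMax" objective (maximize the number of 1-bits in a binary string). The population size, selection cutoff, and mutation rate below are illustrative choices:

```python
import random

random.seed(1)
CHROM_LEN = 20  # each chromosome is a binary string of this length

def fitness(chrom):
    # Toy objective ("OneMax"): count the 1-bits in the chromosome.
    return sum(chrom)

# Stage 1: build and maintain a population of candidate solutions.
population = [[random.randint(0, 1) for _ in range(CHROM_LEN)]
              for _ in range(30)]

for _ in range(60):
    # Stage 2: choose the better solutions for recombination.
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    # Stage 3: create offspring to replace the poorer solutions.
    offspring = []
    while len(offspring) < 20:
        p1, p2 = random.sample(parents, 2)
        cut = random.randrange(1, CHROM_LEN)
        child = p1[:cut] + p2[cut:]      # one-point crossover
        if random.random() < 0.2:        # occasional mutation
            i = random.randrange(CHROM_LEN)
            child[i] = 1 - child[i]
        offspring.append(child)
    population = parents + offspring

best = max(population, key=fitness)
print(fitness(best))
```

Over the generations the average fitness of the population climbs toward the optimum, exactly the Darwinian dynamic described above: the best performers reproduce and their offspring displace the weakest strings.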
Genetic algorithms provide various benefits to existing machine learning technologies: they can be used in data mining for field/attribute selection, and they can be combined with neural networks to determine optimal weights and architectures.
Next-generation artificial intelligence and machine learning
We’ve seen that current A.I. and machine learning technologies suffer from various limits. Most importantly, they lack the capacity for:
Personalization: To successfully protect and serve customers, employees, and audiences, we must know them by their unique, individual behavior over time, not by static, generic categorization.
Adaptability: Models based only on historical data or expert rules are inefficient because new trends and behaviors arise daily.
Self-learning: An intelligent system should learn over time from every activity associated with each specific entity.
To illustrate the limits, consider the challenges of two important business fields: network security and fraud prevention. Fraud and intrusion are perpetually changing and never remain static. Fraudsters and hackers are criminals who continuously adjust and adapt their techniques. Controlling fraud and intrusion within a network environment requires a dynamic and continuously evolving process. A static set of rules, or a machine learning model developed by learning from historical data, has only short-term value.
In network security, dozens of new malware programs with ever more sophisticated methods of embedding and disguising themselves appear on the internet every day. In most cases, after vulnerabilities are discovered, a patch is released to address the vulnerability. The problem is that it is often easy for hackers to reverse engineer the patch, so another defect is found and exploited within hours of the patch's release. The Aurora attack against Google and several other companies, which originated in China in the fall of 2009, exploited previously undiscovered dangling pointers in a Microsoft browser.
Tools that autonomously detect new attacks against specific targets, networks, or individual computers are needed. They must be able to change their parameters to thrive in new environments, learn from each individual activity, respond to various situations in different ways, and track and adapt to the specific situation and behavior of every entity of interest over time. This continuous, one-to-one behavioral analysis provides real-time actionable insights.
Smart Agents technology
Smart Agents technology is the only AI approach able to overcome the limits of legacy machine learning technologies, allowing personalization, adaptability, and self-learning.
Smart Agents technology is a personalization technology that creates a virtual representation of every entity and learns/builds a profile from the entity's actions and activities. In the payment industry, for example, a Smart Agent is associated with each individual cardholder, merchant, or terminal. The Smart Agent associated with an entity (such as a card or merchant) learns in real time from every transaction made and builds that entity's specific and unique behavior over time. There are as many Smart Agents as active entities in the system. For example, if there are 200 million cards transacting, there will be 200 million Smart Agents instantiated to analyze and learn the behavior of each. Decision-making is specific to each cardholder and no longer relies on logic that is universally applied to all cardholders, regardless of their individual characteristics. The Smart Agents are self-learning and adaptive since they continuously update their individual profiles from each activity and action performed by the entity. Each Smart Agent pulls all relevant data across multiple channels, irrespective of the type, format, or source of the data, to produce virtual profiles.
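As a rough illustration only, and not the actual Smart Agents implementation, a per-entity profile that updates with every transaction and makes entity-specific decisions might be sketched like this; the fields and the "3x the entity's own average" threshold are invented for the example:

```python
class SmartAgent:
    """Illustrative per-entity profile: one instance per card, merchant, or terminal."""

    def __init__(self, entity_id):
        self.entity_id = entity_id
        self.tx_count = 0
        self.total_amount = 0.0
        self.merchants_seen = set()

    def update(self, amount, merchant):
        # Self-learning: the profile evolves with every transaction, in real time.
        self.tx_count += 1
        self.total_amount += amount
        self.merchants_seen.add(merchant)

    def is_unusual(self, amount):
        # Entity-specific decision: compare against this entity's own history,
        # not a rule applied uniformly to all cardholders. (Threshold invented.)
        if self.tx_count < 3:
            return False  # too little history to judge
        return amount > 3 * (self.total_amount / self.tx_count)

# One agent per active entity, created on first activity.
agents = {}
def agent_for(entity_id):
    if entity_id not in agents:
        agents[entity_id] = SmartAgent(entity_id)
    return agents[entity_id]

card = agent_for("card_42")
for amt in (20.0, 35.0, 25.0):
    card.update(amt, "grocer")
print(card.is_unusual(500.0))  # far above this card's own average
```

The key point the sketch conveys is the one-to-one mapping: every active card gets its own agent and its own thresholds, so a $500 purchase is unusual for this card but might be routine for another.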
So, for example, in a financial portfolio management system, a multi-agent system consists of Smart Agents that cooperatively monitor and track stock quotes, financial news, and company earnings reports to continuously make suggestions to the portfolio manager.
Each profile is automatically updated in real-time and the resulting intelligence is shared across the Smart Agents. This one-to-one behavioral profiling provides unprecedented, omni-channel visibility into the behavior of an entity.
A comprehensive intelligent solution must combine the benefits of existing artificial intelligence and machine learning techniques with the unique capabilities of Smart Agents technology. The result is a comprehensive solution that is intelligent, self-learning and adaptive.
For more detail, please download our white paper, Artificial Intelligence and Machine Learning: The Next Generation.