Model-based AI technology that’s personalized for your unique business
While our competitors use a selection of legacy database-driven technologies, only Brighterion’s Smart Agents bring a powerful distributed file system specifically designed to store knowledge and behaviors. This distributed architecture enables lightning-fast response times (under 10 milliseconds) even on entry-level servers, along with end-to-end encryption and traceability. With no single point of failure, it also provides unlimited scalability and resilience to disruption. Our customers benefit from 99.9999% uptime.
How are we different?
Based on decades of international experience, we have developed an unrivalled, unsupervised artificial intelligence and machine learning platform. Brighterion, powered by Smart Agents:
- Automatically updates each profile in real time
- Reliably identifies anomalous behavior through continuous adaptive learning
- Requires no database or data warehouse
- Provides unlimited scalability and no disruption – no single point of failure
- Builds production-ready models in 6-8 weeks, rather than months or years
Supports any type of data, any source, any format
Brighterion’s Smart Agents provide robust virtual profiles from all data sources and multiple channels regardless of type, complexity and volume. Seamlessly accepting data from myriad sources, our platform can understand the full behavior of any entity or individual. Once the data is received, Smart Agents enrich the data, saving our customers hours of tedious work and allowing them to focus on their businesses.
Segment of one
Understand your users’ behaviors through continuous observation at an individual level. With real-time updates, the platform enables decision making specific to each profile, and specific to its unique behavior. This one-to-one behavioral profiling provides unprecedented, omni-channel visibility to identify fraud and security breaches as they happen.
Brighterion’s unsupervised learning is achieved with our patented Smart Agents technology, which overcomes the limits of legacy machine learning through personalization, adaptability and self-learning, enabling the discovery, identification and mitigation of anomalous activities.
The tools of Smart Agent technology
Smart Agents do not rely on pre-programmed rules and do not try to anticipate every possible scenario. Instead, they create profiles specific to each entity and behave according to their goals, observations, and the knowledge they continuously acquire through their interactions with other Smart Agents. Each Smart Agent pulls all relevant data across multiple channels, irrespective of the data’s type, format or source, to produce robust virtual profiles. Each profile is automatically updated in real time, and the resulting intelligence is shared across the Smart Agents. To learn more, see our blog post: Next Generation artificial intelligence and machine learning.
Smart Agent benefits
- Continually evolving models with significantly lower operational costs
- Dramatically decreased false positives and higher detection rates
- Cross-channel anomaly detection and prediction ability
- Low-risk behavior identification
- Improved overall model performance in the face of changing threats and legitimate customer behavior patterns
Brighterion differentiator: proprietary distributed AI storage
As noted above, only Brighterion’s Smart Agents store knowledge and behaviors in a powerful, proprietary distributed file system rather than a legacy database. The result is sub-10-millisecond response times, end-to-end encryption and traceability, unlimited scalability, and no single point of failure, backed by a 99.9999% uptime guarantee.
[Infographic: 100X compression; distributed architecture with no single point of failure]
Brighterion named “Most Scalable Platform” in Aite Group report
The tools of Smart Agent technology explained
Real-time, long-term profiling
Brighterion’s profiling solution creates hundreds of new features on the fly that are used for scoring. The features generated are derived fields such as grouping, mappings, sets, expressions and more, creating real-time, long-term profiles that track entity behavior.
Brighterion’s profiling uses proprietary algorithms that are database independent and can profile any entity in real time: merchant, agreement, outlet, terminal, merchant segmentations and others. It allows you to monitor a wide variety of merchant data, such as purchasing patterns, suspicious changes in activity, the number of transactions over a window of time, entity transaction frequency, comparisons of transactions versus authorizations to detect anomalies, and trends in the number of chargebacks over time. There is no limit to the number of profiling criteria that can be defined.
- Real-time profiling – The time window for aggregation can be anywhere from several seconds to several weeks. The counters are updated in real time as transactions are processed
- Long-term profiling – Profile the same entities over a long time period, from a few months to several years. Long-term profiling is used to establish the baseline behaviors for entities, such as users, IP addresses, and devices. At any time, a window can be defined for aggregation and the user can specify the refresh rate to automatically update the profiling values
- Recursive profiling – Track and monitor an entity’s normal behavior over time to gain a full view of user behavior. An example would be to compute the maximum number of times a user logs on to online banking in a week
- Geo-location profiling – Compute the distance in real time between two zip codes, IP addresses, or other geo-location data to detect abnormal behaviors
- Multidimensional profiling – Profile multiple entity interactions to link suspicious behaviors together and identify unknown entity links
- Peer comparison profiling – Compare entities, such as merchants, to their peers in real time to detect any suspicious activity
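To make the real-time profiling idea concrete, here is a minimal Python sketch (not Brighterion’s implementation) of a per-entity counter over a sliding time window, the kind of aggregation described in the first bullet above. The entity and window values are illustrative.

```python
from collections import defaultdict, deque

class SlidingWindowProfile:
    """Per-entity rolling event counter over a fixed time window (toy sketch)."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.events = defaultdict(deque)  # entity_id -> event timestamps

    def record(self, entity_id, timestamp):
        """Add an event and evict anything older than the window."""
        q = self.events[entity_id]
        q.append(timestamp)
        while q and q[0] <= timestamp - self.window:
            q.popleft()

    def count(self, entity_id):
        """Number of events for this entity inside the current window."""
        return len(self.events[entity_id])

# Example: count transactions per merchant over a 60-second window.
profile = SlidingWindowProfile(window_seconds=60)
for t in (0, 10, 30, 95):  # event times in seconds
    profile.record("merchant-42", t)
print(profile.count("merchant-42"))  # 1 (only the event at t=95 is inside the last 60s)
```

A production profiler would aggregate many derived fields per entity, but the update-and-evict pattern is the same.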
Fuzzy logic
Traditional logic typically categorizes information into binary patterns such as black/white, yes/no, or true/false. Fuzzy logic brings a middle ground where statements can be partially true and partially false to account for much of day-to-day human reasoning. For example, stating that a tall person is over 6′ 2″ traditionally means that people under 6′ 2″ are not tall. If a person is nearly 6′ 2″, then common sense says the person is also somewhat tall. Boolean logic states a person is either tall or short and allows no middle ground, while fuzzy logic allows different interpretations for varying degrees of height.
Neural networks, data mining, case-based reasoning (CBR), and business rules can benefit from fuzzy logic. For example, fuzzy logic can be used in CBR to automatically cluster information into categories which improve performance by decreasing sensitivity to noise and outliers. Fuzzy logic also allows business rule experts to write more powerful rules. To learn more, please read our blog post: A closer look at AI: fuzzy logic.
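The height example above can be expressed as a simple membership function. The thresholds below are illustrative only, not part of any Brighterion product:

```python
def tall_membership(height_inches):
    """Degree of membership in the fuzzy set 'tall': 0 below 5'8",
    1 above 6'2", linearly graded in between (thresholds are illustrative)."""
    low, high = 68.0, 74.0  # 5'8" and 6'2" in inches
    if height_inches <= low:
        return 0.0
    if height_inches >= high:
        return 1.0
    return (height_inches - low) / (high - low)

# A person just under 6'2" is "somewhat tall" rather than simply "not tall":
print(tall_membership(73))  # ~0.83
print(tall_membership(62))  # 0.0 (clearly not tall)
print(tall_membership(76))  # 1.0 (clearly tall)
```

Boolean logic would force the 73-inch person into one of two buckets; the fuzzy membership value preserves the in-between degree.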
Constraint programming
Constraint programming is a powerful paradigm for solving combinatorial search problems that draws on a wide range of techniques from computer science, AI, databases, and operations research. Brighterion’s constraint programming technology relieves programmers of the burden of learning a new language.
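As a toy illustration of the paradigm (unrelated to Brighterion’s solver), the classic map-coloring problem can be stated declaratively as a single constraint, here solved by brute-force enumeration; the region names are illustrative:

```python
from itertools import product

regions = ["WA", "NT", "SA", "Q"]  # illustrative region names
adjacent = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"), ("SA", "Q")]
colors = ["red", "green", "blue"]

def solutions():
    """Enumerate color assignments satisfying the 'neighbors differ' constraint."""
    for assignment in product(colors, repeat=len(regions)):
        coloring = dict(zip(regions, assignment))
        if all(coloring[a] != coloring[b] for a, b in adjacent):
            yield coloring

first = next(solutions())
print(first)  # {'WA': 'red', 'NT': 'green', 'SA': 'blue', 'Q': 'red'}
```

The problem is stated as what must hold (neighbors differ), not how to find the answer; a real constraint solver replaces the brute-force loop with propagation and intelligent search.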
Genetic algorithms
Genetic algorithms (GA) work by simulating the logic of Darwinian selection, where only the best performers of a species are selected for reproduction. Over many generations, natural populations evolve according to the principles of natural selection. Those individuals most suited to the environment are more likely to survive and generate offspring, thus transmitting their biological heredity to new generations. A genetic algorithm can be thought of as a population of individuals represented by chromosomes.
In computing terms, a genetic algorithm represents chromosomes as arrays of bits or characters (binary strings). Each string represents a potential solution, and the algorithm manipulates the most promising chromosomes in search of improved solutions. A genetic algorithm operates through a cycle of three stages:
- Build and maintain a population of solutions to a problem
- Choose the better solutions for recombination with each other
- Use their offspring to replace poorer solutions
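The three-stage cycle above can be sketched in a few lines of Python. This toy example (not Brighterion’s implementation) evolves bit strings toward the all-ones solution, a standard teaching objective known as OneMax:

```python
import random

random.seed(0)

def fitness(chromosome):
    """OneMax: count of 1-bits; the toy objective to maximize."""
    return sum(chromosome)

def crossover(a, b):
    """Single-point recombination of two parent bit strings."""
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:]

def mutate(c, rate=0.01):
    """Flip each bit with a small probability."""
    return [bit ^ (random.random() < rate) for bit in c]

def evolve(pop_size=30, length=20, generations=50):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)  # 1. maintain a ranked population
        parents = pop[: pop_size // 2]       # 2. choose the better solutions
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children             # 3. offspring replace poorer solutions
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # best OneMax score found (out of a possible 20)
```

Because the better half of each generation survives unchanged, the best fitness never decreases; crossover and mutation supply the variation that drives it upward.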
Genetic algorithms complement existing machine learning technologies: they can be used in data mining for field/attribute selection, and they can be combined with neural networks to determine optimal weights and architecture. To learn more, please read our blog post: A closer look at AI: genetic algorithms.
Unsupervised learning
Unsupervised learning is learning from unlabeled data, where particularly informative privileged variables or labels do not exist. As a result, the greatest challenge is often to differentiate between what is relevant and what is irrelevant in any particular dataset. In the context of classification, the goal is to divide a set of unlabeled data into classes, or clusters. Unsupervised learning also encompasses dimensionality reduction, feature selection, and a number of latent variable models.
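A minimal sketch of one common unsupervised technique, k-means clustering, shows how unlabeled readings can be divided into clusters without any labels. The data and cluster count are illustrative:

```python
import random

random.seed(1)

def kmeans(points, k, iterations=20):
    """Plain k-means on 1-D data: assign each point to its nearest
    centroid, then recompute centroids as cluster means."""
    centroids = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centroids[i]) ** 2)
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups of unlabeled 1-D readings; no labels are ever used.
data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.4]
result = kmeans(data, k=2)
print(result)  # centroids near 1.0 and 10.1
```

The algorithm discovers the two groups purely from the structure of the data, which is the essence of clustering as described above.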
Case-based reasoning
Case-based reasoning (CBR) is a problem-solving paradigm that differs from other major AI approaches. A CBR system can be used in risk monitoring, financial markets, defense, and marketing, just to name a few. CBR learns from past experiences to solve new problems. Rather than relying on a domain expert to write the rules or make associations along generalized relationships between problem descriptors and conclusions, a CBR system learns from previous experience in the same way a physician learns from patients.
A CBR system will create generic cases based on the diagnosis and treatment of previous patients to determine the disease and treatment for a new patient. CBR systems can be built without the need for extracting knowledge from experts, which is difficult and requires time and expertise. Implementing a CBR system consists of identifying relevant case features, and the system continually learns from each new situation. Generalized cases can provide explanations that are richer than explanations generated by chains of rules. To learn more, please read our blog post: A closer look at AI: case-based reasoning.
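The retrieve-and-reuse step at the heart of CBR can be sketched with a hypothetical three-case base of transactions; the features, similarity measure, and outcomes below are invented for illustration:

```python
# Hypothetical case base: past transactions with a known outcome.
case_base = [
    {"amount": 25.0,  "hour": 14, "outcome": "legitimate"},
    {"amount": 30.0,  "hour": 11, "outcome": "legitimate"},
    {"amount": 900.0, "hour": 3,  "outcome": "fraud"},
]

def similarity(a, b):
    """Crude inverse-distance similarity over two numeric features."""
    return 1.0 / (1.0 + abs(a["amount"] - b["amount"]) + abs(a["hour"] - b["hour"]))

def solve_by_retrieval(new_case):
    """Retrieve the most similar past case and reuse its outcome."""
    best = max(case_base, key=lambda c: similarity(c, new_case))
    return best["outcome"]

print(solve_by_retrieval({"amount": 850.0, "hour": 2}))   # fraud
print(solve_by_retrieval({"amount": 20.0,  "hour": 13}))  # legitimate
```

No expert wrote a rule such as "large late-night transactions are fraud"; the conclusion falls out of similarity to past experience, and appending each resolved case to the base is how such a system keeps learning.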
Neural networks and deep learning
Deep neural networks learn hierarchical layers of representation from the input to perform pattern recognition. When the problem exhibits non-linear properties, deep networks are computationally more attractive than classical neural networks. A deep network can be viewed as a program in which the functions computed by the lower-layered neurons are subroutines. These subroutines are reused many times in the computation of the final program. To learn more, please read our blog post: A closer look at AI: neural networks and deep learning.
A neural network (NN) is a computer program inspired by the structure of the brain. A neural network consists of many simple elements called artificial neurons, each producing a sequence of activations. The elements used in a neural network are far simpler than biological neurons; the number of elements and their interconnections are orders of magnitude fewer than the number of neurons and synapses in the human brain.
Backpropagation (BP) is the most popular supervised learning algorithm for neural networks. The network is organized into layers, with connections between the layers. The goal of BP is to compute the gradient (a vector of partial derivatives) of an objective function with respect to the neural network’s parameters. Input neurons activate through sensors perceiving the environment, while other neurons activate through weighted connections from previously active neurons. Each element receives numeric inputs and transforms this data by calculating a weighted sum over its inputs; a non-linear function is then applied to this transformation to calculate an intermediate state. To learn more, please read our blog post: A closer look at AI: neural networks and deep learning.
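The weighted sum, non-linear transform, and gradient computation described above can be sketched for a single neuron. This is an illustrative example with invented values, not Brighterion’s implementation:

```python
import math

def sigmoid(x):
    """Non-linear activation mapping any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# One neuron: weighted sum of inputs, then a non-linear activation.
inputs  = [1.0, 0.5]
weights = [0.4, -0.6]
bias    = 0.1
target  = 1.0

z = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum
y = sigmoid(z)                                          # non-linear transform

# Backpropagation step: gradient of squared error wrt each weight
# via the chain rule  dE/dw = dE/dy * dy/dz * dz/dw.
error_grad   = 2 * (y - target)  # dE/dy for E = (y - target)^2
sigmoid_grad = y * (1 - y)       # dy/dz for the sigmoid
weight_grads = [error_grad * sigmoid_grad * x for x in inputs]

learning_rate = 0.5
weights = [w - learning_rate * g for w, g in zip(weights, weight_grads)]
print(weights)  # both weights nudged so the output moves toward the target
```

In a multi-layer network the same chain rule is applied layer by layer, which is exactly the gradient computation BP is organized to perform.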
Data mining
Data mining, or knowledge discovery in databases, is the nontrivial extraction of implicit, previously unknown and potentially useful information from data. Statistical methods are used to identify trends and other relationships in large databases.
Data mining has attracted attention because of the wide availability of vast amounts of data, and the need for turning such data into useful information and knowledge. The knowledge gained can be used for applications ranging from risk monitoring, business management, production control, market analysis, engineering, and science exploration. To learn more, please read our blog post: A closer look at AI: data mining.
Business rules management system
A business rules management system (BRMS) enables companies to easily define, deploy, monitor, and maintain new regulations, procedures, policies, market opportunities, and workflows. One of the main advantages of business rules is that they can be written by business analysts without the need for IT resources. Rules can be stored in a central repository and can be accessed across the enterprise. Rules can be specific to a context, a geographic region, a customer, or a process. An advanced BRMS offers role-based management authority, testing, simulation, and reporting to ensure that rules are updated and deployed accurately. To learn more, please read our blog post: A closer look at AI: business rules management system.
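As an illustrative sketch only (not any particular BRMS product), business rules can be modeled as condition/action pairs evaluated in order, so that the rules themselves read declaratively; the thresholds and country codes below are invented:

```python
# Hypothetical rules, kept separate from the evaluation engine so that a
# business analyst could state them without touching application code.
rules = [
    (lambda txn: txn["amount"] > 10000,              "route to manual review"),
    (lambda txn: txn["country"] not in {"US", "CA"}, "apply cross-border check"),
    (lambda txn: True,                               "approve"),  # default rule
]

def evaluate(txn):
    """Return the action of the first rule whose condition matches."""
    for condition, action in rules:
        if condition(txn):
            return action

print(evaluate({"amount": 15000, "country": "US"}))  # route to manual review
print(evaluate({"amount": 50,    "country": "FR"}))  # apply cross-border check
print(evaluate({"amount": 50,    "country": "US"}))  # approve
```

Because the rule list is data, it can live in a central repository and be updated, tested, and simulated independently of the engine that evaluates it, which is the separation a BRMS provides.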