News & Blogs





CASINOS & EVOLVING ANA...

Companies use analytics to be more competitive, and the financial services industry has known this for decades. In fact, many financial services analytics professionals are moving to gaming, as both industries need to balance risk and return. More and more, casinos are using analytics to make decisions in areas that have traditionally relied upon “expertise” rather than data-driven approaches to increase profits…

Where to strategically place games on the casino floor

Today, modeling teams at a number of casinos use software such as SAS to predict the impact of moving games from one area of a casino floor to another1.  

To set a baseline, data is collected on how much money each game, whether a table game or a slot machine, currently brings in, as well as on how people move about the casino. When that data is combined with the odds of a particular game paying out, the analytics team can model how the game would perform in different locations and determine where it should be placed to achieve the optimal performance level. Supermarkets use a similar technique: just as with shelf placement in a grocery store, the question is where on the casino floor a game would get the best yield.
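To make the idea concrete, here is a minimal sketch of that kind of placement comparison. It is not the modeling approach any particular casino uses; the zones, traffic figures, stop rates and house edge are all hypothetical placeholders.

```python
# Minimal sketch: rank candidate floor locations for one game by expected
# daily yield. Every number below is a hypothetical placeholder.

HOUSE_EDGE = 0.08               # assumed long-run hold (derived from payout odds)
AVG_BET = 1.50                  # assumed average wager per play, in dollars
PLAYS_PER_STOPPING_GUEST = 12   # assumed plays by a guest who stops at the game

# Hypothetical movement data: daily foot traffic and observed stop rate per zone
zones = {
    "entrance":    {"foot_traffic": 9000, "stop_rate": 0.020},
    "bar_edge":    {"foot_traffic": 4000, "stop_rate": 0.055},
    "back_corner": {"foot_traffic": 1200, "stop_rate": 0.080},
}

def expected_daily_yield(zone_name):
    """Expected house win: traffic x stop rate x plays x average bet x edge."""
    z = zones[zone_name]
    return (z["foot_traffic"] * z["stop_rate"]
            * PLAYS_PER_STOPPING_GUEST * AVG_BET * HOUSE_EDGE)

for zone_name in sorted(zones, key=expected_daily_yield, reverse=True):
    print(f"{zone_name:12s} expected daily yield ~ ${expected_daily_yield(zone_name):,.0f}")
```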

A holistic data-driven approach for all casino operations

Gaming revenue is not the largest portion of what casinos bring in; they derive much of their revenue from their resort operations. For example, a good way to encourage gambling is to give customers free nights or discounted dinners in the hotel that houses a casino. But the casino would lose money if it did so for everyone, because some people don’t gamble much. To help pinpoint such offers, savvy casinos run customer analytics applications on the data they have collected showing how often individual guests gamble, how much money they tend to spend in the casino and what kinds of games they like. This is all part of a significant shift in how casinos do business: it is getting to the point that casinos are run like financial services firms.
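As a rough illustration of how such targeting might work (not any casino's actual rules), the sketch below extends a comp only when a guest's expected value covers its cost; the guest records, comp cost and reinvestment rate are invented.

```python
# Illustrative comp-offer targeting: extend a free night only when a guest's
# expected gaming value covers the cost of the offer. All figures are made up.

guests = [
    {"id": "G1", "visits_per_year": 24, "avg_daily_theoretical_win": 310.0},
    {"id": "G2", "visits_per_year": 2,  "avg_daily_theoretical_win": 45.0},
    {"id": "G3", "visits_per_year": 10, "avg_daily_theoretical_win": 140.0},
]

COMP_COST = 180.0          # cost to the casino of one free hotel night
REINVESTMENT_RATE = 0.30   # assumed share of theoretical win returned as comps
ASSUMED_STAY_NIGHTS = 2

for guest in guests:
    expected_trip_value = guest["avg_daily_theoretical_win"] * ASSUMED_STAY_NIGHTS
    comp_budget = expected_trip_value * REINVESTMENT_RATE
    offer = "free night" if comp_budget >= COMP_COST else "dining discount"
    print(guest["id"], "->", offer)
```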

The challenges of shifting to Big Data

At MGM Resorts’ 15 casinos across the United States, thousands of visitors are banging away at 25,000 slot machines. Those visitors rang up nearly 28 percent of the company’s $6 billion in annual domestic revenue in 2013. The game and customer data that MGM collects daily, and the behind-the-scenes software that transforms that data into critical insights, in turn boost the customer experience and profit margins2.

Lon O’Donnell, MGM’s first-ever director of corporate slot analytics, is challenged to show why big data is a big deal when it comes to plotting MGM’s growth. “Our goal is to make that data more digestible and easier to filter,” says O’Donnell, who estimates that Excel still handles an incredible 80 percent of the company’s workload. In the near term, that means the team is experimenting with data visualization tools, such as its Slotfocus dashboard, to make slot data more crunchable. Heavy-lifting analytics are a goal down the road3. MGM isn’t the only gaming company interested in big data - nor was it the first. That distinction goes to Gary Loveman, who left teaching at Harvard Business School for Las Vegas in the late 1990s and turned Harrah’s into gaming’s first technology-centric player.

History has caught up with the industry. For decades, Las Vegas casinos were some of the only legal gambling outfits in the country, so they could afford to be complacent. That advantage disappeared during the past two decades with the rise of legal gambling in 48 states. The switch to slicker, more sophisticated cloud apps is still on the horizon. One reason is the regulated nature of gaming: casinos tend to organize data in spreadsheets to report to regulators, who review the accounting and verify that slots perform within legal specifications. But those reports are not ideal business intelligence sources.

Using Big Data to catch cheaters

Casinos are at the forefront of new tools to help them make more money and reduce what they consider to be fraud. One tool is something called non-obvious relationship awareness (NORA) software, which allows casinos to determine quickly whether a potentially colluding player and dealer have ever shared a phone number or a room at the casino hotel, or lived at the same address4,5. “We created the software for the gaming industry,” says Jeff Jonas, founder of Systems Research & Development, which originally designed NORA. The technology has proved so effective that Homeland Security adapted it to sniff out connections between suspected terrorists. “Now it’s used as business intelligence for banks, insurance companies and retailers,” Jonas says. Three types of cameras feed the video wall in the Mirage’s surveillance room: fixed-field-of-view units focus on tables, motorized pan-tilt-zoom cameras survey the floor, and 360-degree cams take in an entire area.
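NORA itself is commercial software that goes far beyond this, but the core idea, surfacing hidden links between records, can be illustrated with a toy check for shared contact details; the player and dealer records below are invented.

```python
# Toy "non-obvious relationship" check: flag any player and dealer who share
# a phone number or home address. Real NORA-style systems add fuzzy matching,
# aliases and hotel-stay records; these few rows are invented for illustration.

players = [
    {"name": "P. Smith", "phone": "702-555-0101", "address": "12 Desert Rd"},
    {"name": "A. Jones", "phone": "702-555-0199", "address": "88 Palm Ave"},
]
dealers = [
    {"name": "D. Lee",  "phone": "702-555-0101", "address": "4 Canyon St"},
    {"name": "M. Chen", "phone": "702-555-0142", "address": "88 Palm Ave"},
]

def shared_attributes(a, b):
    """Return the list of fields on which two records match exactly."""
    return [field for field in ("phone", "address") if a[field] == b[field]]

for player in players:
    for dealer in dealers:
        hits = shared_attributes(player, dealer)
        if hits:
            print(f"Review: {player['name']} and {dealer['name']} share {hits}")
```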

Big Data and attendant technologies are starting to transform businesses right before our very eyes. Old ways of doing things are beginning to fall by the wayside. When specific examples like NORA become more public, Big Data suddenly becomes less abstract to those who make decisions.


PREDICTIVE POLICING

The National Institute for Justice explains that “predictive policing tries to harness the power of information, geospatial technologies and evidence-based intervention models to reduce crime and improve public safety. This two-pronged approach — applying advanced analytics to various data sets, in conjunction with intervention models — can move law enforcement from reacting to crimes into the realm of predicting what and where something is likely to happen and deploying resources accordingly.”

Today, more and more police departments are using algorithms that predict future crimes. Predictive policing is just one tool in this new, tech-enhanced and data-fortified era of fighting and preventing crime. As the ability to collect, store and analyze data becomes cheaper and easier, law enforcement agencies all over the world are adopting techniques that harness the potential of technology to provide more and better information. But while these new tools have been welcomed by law enforcement agencies, they’re raising concerns about privacy, surveillance and how much power should be given over to computer algorithms1.

The Origins of Predictive Policing

The notion of crime forecasting dates back to 1931, when sociologist Clifford R. Shaw of the University of Chicago and criminologist Henry D. McKay of Chicago’s Institute for Juvenile Research wrote a book exploring the persistence of juvenile crime in specific neighborhoods. Scientists have experimented with using statistical and geospatial analyses to determine crime risk levels ever since. In the 1990s, the National Institute of Justice (NIJ) and others (including the New York Police department) embraced geographic information system tools for mapping crime data, and researchers began using everything from basic regression analysis to cutting-edge mathematical models to forecast when and where the next outbreak might occur. But until recently, the limits of computing power and storage prevented them from using large data sets.

Jeffrey Brantingham is a professor of anthropology at UCLA who helped develop the predictive policing system that is now licensed to dozens of police departments under the brand name PredPol. “This is not Minority Report,” he’s quick to say, referring to the science-fiction story often associated with PredPol’s technique and proprietary algorithm. “Minority Report is about predicting who will commit a crime before they commit it. This is about predicting where and when crime is most likely to occur, not who will commit it.”

Brantingham also emphasized that the algorithm cannot replace police work; it’s intended to help police officers do their jobs better. “Our directive to officers was to ‘get in the box’ and use their training and experience to police what they see,” said Cmdr. Sean Malinowski, the LAPD’s chief of staff. “Flexibility in how to use predictions proved to be popular and has become a key part of how the LAPD deploys predictive policing today2.”

What is PredPol?

Dozens of cities across the US and beyond are using the PredPol software to predict a handful of other crimes, including gang activity, drug crimes and shootings. Police in Atlanta use PredPol to predict robberies. Seattle police are using it to target gun violence. In England, Kent police have used PredPol to predict drug crimes and robberies. In Kent, it’s not just police taking a more proactive approach by concentrating officers in prediction areas, but also civilian public safety volunteers and drug intervention workers.

The prediction algorithm is constantly reacting to crime reports in these cities, and a red box predicting crime can move at any moment. But although officers in the divisions using PredPol are required to spend a certain amount of time in those red boxes every patrol, they’re not just blindly following the orders of the crime map. The officer still has a lot of discretion. It’s not just the algorithm. The officer still has to know the area well enough to know when to adjust and go back into manual mode.

PredPol’s predictive policing is the sum of two parts:

1. Predictive Policing Technology: An algorithm developed from high-level mathematics and sociological and statistical analysis of criminality. This algorithm factors in historical crime data from the police department and produces predictions on where and when a crime is most likely to occur.

2. Insights of officers and crime analysts. According to the National Institute of Justice: “the predictive policing approach does not replace traditional policing. Instead, it enhances existing approaches such as problem-oriented policing, community policing, intelligence-led policing and hot spot policing.”

Predictive policing is more than traditional hotspot mapping. Its forecasting technology combines high-level mathematics, machine learning and proven theories of crime behavior to take a forward-looking approach to crime prevention3.

While PredPol’s predictive boxes predict that a crime will happen in the prediction area, there is no guarantee that an incident or arrest will occur. The presence of police officers in the prediction areas creates a deterrence and suppression effect, thus preventing crime in the first place.

PredPol does not collect, upload, analyze or in any way involve any information about individuals or populations and their characteristics – PredPol’s software technology does not pose any personal privacy or profiling concerns. The algorithm uses only three pieces of data – type, place, and time – of past crimes.
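PredPol's algorithm is proprietary, so the sketch below is only a rough stand-in for place-and-time forecasting in general: it scores grid cells by a recency-weighted count of past incidents, using nothing more than the type, place and time of each crime. The incident list, cell size and decay constant are all invented.

```python
# Rough stand-in for place-and-time crime forecasting (not PredPol's actual
# algorithm): score grid cells by a recency-weighted count of past incidents
# and flag the top-scoring cells as the day's "prediction boxes".
from collections import defaultdict
from math import exp

# Hypothetical past incidents: (crime_type, x_cell, y_cell, days_ago)
incidents = [
    ("burglary", 3, 7, 1), ("burglary", 3, 7, 4), ("robbery", 3, 6, 2),
    ("burglary", 9, 2, 30), ("robbery", 5, 5, 12), ("burglary", 3, 7, 9),
]

DECAY_DAYS = 7.0  # assumed rate at which an old incident's influence fades

scores = defaultdict(float)
for crime_type, x, y, days_ago in incidents:
    scores[(x, y)] += exp(-days_ago / DECAY_DAYS)   # recent incidents weigh more

top_cells = sorted(scores.items(), key=lambda item: item[1], reverse=True)[:3]
for (x, y), score in top_cells:
    print(f"prediction box at grid cell ({x}, {y}), score {score:.2f}")
```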

The Chicago Police Department Takes Predictive Policing One Step Further

While PredPol’s approach to predictive policing seeks to forecast where and when crime will happen, another approach focuses on who will commit crime or become a victim…

The Chicago Police have made it personal. The department is using network analysis to generate a highly controversial Strategic Subject List of people deemed at risk of becoming either victims or perpetrators of violent crimes. Officers and community members then pay visits to people on the list to inform them that they are considered high-risk4.

The Custom Notification program, as it’s called, was inspired in part by studies done by Andrew Papachristos, a sociologist at Yale University. Papachristos grew up in Chicago’s Rogers Park neighborhood in the 1980s and ’90s, at the height of the crack era. When he started studying crime, Papachristos wanted to understand the networks behind it. For a 2014 paper, he and Christopher Wildeman of Cornell University studied a high-crime neighborhood on Chicago’s West Side. They found that 41% of all gun homicide victims in the community of 82,000 belonged to a network of people who had been arrested together, and who comprised a mere 4% of the population—suggesting, with other studies, that much can be learned about crime by examining the company people keep, Papachristos says.
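The network idea behind that study can be illustrated with a toy co-arrest graph; the names and links below are entirely made up, whereas the real analysis was built on arrest and homicide records for tens of thousands of people.

```python
# Toy co-arrest network: people arrested together form edges, and we ask what
# share of shooting victims fall inside that network. Data is entirely made up.
import networkx as nx

co_arrests = [("A", "B"), ("B", "C"), ("C", "D"), ("E", "F")]  # arrested together
victims = {"B", "D", "F", "Z"}                                  # shooting victims

G = nx.Graph()
G.add_edges_from(co_arrests)

victims_in_network = {v for v in victims if v in G}
print(f"{len(victims_in_network)} of {len(victims)} victims are in the co-arrest network")

# Everyone connected, directly or indirectly, to one high-risk individual
print("connected to B:", nx.node_connected_component(G, "B"))
```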

Intrigued by these ideas, the Chicago police teamed up with Miles Wernick, a medical imaging researcher at the Illinois Institute of Technology in Chicago, to develop the Custom Notification program. Because gang violence was distributed across the city, hot spot policing wasn’t as effective in Chicago, says Commander Jonathan Lewin, head of technology for the department. "The geography of the map isn’t as helpful as looking at people and how risky a person is," he says. The hope was that the list would allow police to provide social services to people in danger, while also preventing likely shooters from picking up a gun.

Validations / Concerns

A recent detailed report from the RAND Corporation concluded that the Custom Notification program implemented in Chicago saved zero lives, and that overall the list of hundreds of likely shooters it generated wasn’t even being used as intended. “There was no practical direction about what to do with individuals on the ‘Strategic Suspect List,’ little executive or administrative attention paid to the pilot, and little to no follow-up with district commanders,” the report concluded. One of its authors pointed out that Chicago’s police department had 11 different anti-violence programs going on, and the list of likely shooters “just got lost.” But it did identify one result of the program: people on the list were more likely to be arrested, prompting the conclusion that it “essentially served as a way to find suspects after the fact5”.

That’s one of the biggest concerns about predictive policing. Some civil liberties groups argue that it just hides racial prejudice “by shrouding it in the legitimacy accorded by science.” If there’s a bias in the criminal justice system, that carries through to the statistics which are ultimately fed into the algorithms, says one analyst with the Human Rights Data Analysis Group and a Ph.D. candidate at Michigan State University. “They’re not predicting the future. What they’re actually predicting is where the next recorded police observations are going to occur.” In addition, with programs such as those used in Chicago and proprietary software like PredPol, the Human Rights Data Analysis Group stated “For the sake of transparency and for policymakers, we need to have some insight into what’s going on so that it can be validated by outside groups.”

Predictive policing techniques such as PredPol have shown promising results. But thoroughly validating the models through a third party has been challenging (as it often is where analytics meet public policy). With the advent of Big Data, predictive policing is still evolving, and civil liberties will have to be an integral part of that evolution going forward. At the end of the day, the analytics behind predictive policing are just another set of tools, not an end in themselves.


IMAGE RECOGNITION

Social media has transformed the way we communicate and socialize in today’s world. Facebook and Twitter are always on the lookout for more information about their users, from their users. People eagerly share their information with the public, and media companies use it to improve their business and services. This information comes from customers in the form of text, images or video. In the age of the selfie, capturing every moment on a cell phone is the norm. Be it a private holiday, an earthquake shaking some part of the world or a cyclone blowing the roof over one’s head away, everything is clicked and posted. These images are used as data by social media companies and researchers for image recognition, also known as computer vision.

Image recognition is the process of detecting and identifying an object or a feature in a digital image or video in order to add value for customers and enterprises. Billions of pictures are uploaded to the internet daily. These images are identified and analysed to extract useful information. The technology has a wide range of applications; in this blog we will touch upon some of them and the techniques used therein.

Text Recognition

We will begin with the technique used to recognise a handwritten number. Machine learning technologies such as deep learning can be used to do so. A brief note on AI, ML, DL and ANN before we proceed. Artificial intelligence (AI) is human-like intelligence exhibited by machines, achieved by training the machines. Machine learning (ML) is an approach to achieving artificial intelligence, and deep learning (DL) is a technique for implementing machine learning. An artificial neural network (ANN) is modelled on the biological neural network. A single neuron will pass a message to another neuron across this network if the sum of the weighted input signals it receives from one or more neurons exceeds a threshold. The condition in which the threshold is exceeded and the message is passed along to the next neuron is called activation1.
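That activation rule can be written in a few lines; the weights, inputs and threshold below are arbitrary illustrative values.

```python
# A single artificial neuron as described above: it "fires" (passes its signal
# on) only when the weighted sum of its inputs exceeds a threshold.

def neuron_fires(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return weighted_sum > threshold

inputs = [0.9, 0.1, 0.4]        # signals arriving from other neurons
weights = [0.8, -0.5, 0.3]      # strength of each incoming connection
print("activated" if neuron_fires(inputs, weights, threshold=0.5) else "silent")
```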

There are different ways to recognize images. We will use a neural network to recognize a simple piece of handwritten text: the number 8. A very critical requirement for machine learning is data, as much data as possible, to train the machine well. A neural network takes numbers as input. To the computer, an image is represented as a grid of numbers, where each number represents how dark a pixel is. A handwritten number 8 can be represented this way.

An 18x18 pixel image is treated as an array of 324 numbers, and these become the 324 input nodes of the neural network.

The neural network will have two outputs: the first predicts the likelihood that the image is an ’8’, and the second predicts the likelihood that it is not an ’8’. The network is trained with different handwritten numbers to differentiate between ’8’ and not-’8’. So, when it is fed an ’8’, it is trained to say that the probability of it being an ’8’ is 100% and of it not being an ’8’ is 0%. It can now recognize an ’8’, but only a particular pattern of 8; if there is a slight change in position or size, it may not recognise it. There are various ways to train it to identify an ’8’ in any position and size. One is a deep neural network: to train better we need more data, and as the data grows the network is made bigger by stacking more layers of nodes, which is what makes it “deep”. Such a network still treats an ’8’ at the top of a picture separately from an ’8’ at the bottom; that limitation is addressed by another technique called a convolutional neural network. All these technologies are evolving rapidly, with improved and refined approaches producing better output.
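Here is a minimal, hedged version of that “is it an 8 or not?” classifier using scikit-learn. The blog’s example uses an 18x18 image (324 inputs); scikit-learn’s bundled digits are 8x8 (64 inputs), but the principle of pixel values in, a two-way decision out, is the same.

```python
# Minimal "8 or not-8" classifier on scikit-learn's 8x8 digit images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X = digits.data                        # each row: 64 pixel-darkness values
y = (digits.target == 8).astype(int)   # 1 = "an 8", 0 = "not an 8"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print(f"test accuracy: {net.score(X_test, y_test):.3f}")
```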

Face Recognition

Face recognition is used to convey a person’s identity. It uniquely identifies us. Biometric face recognition technology has applications in various areas including law enforcement and non-law enforcement.

The conventional pipeline of face recognition consists of four stages4.

Face detection is easier than face identification, as all faces have the same features: eyes, ears, nose and mouth, in almost the same relative positions. Face identification is a lot more difficult because our face is constantly changing, unlike our fingerprints. With every smile, every expression, our face is transformed as its shape contorts. Though humans can identify us even when we sport a different hairstyle, systems have to be trained to do so. Computers struggle with the problem of A-PIE: aging, pose, illumination and expression. These are considered sources of noise that make it difficult to distinguish between faces. Deep learning helps reduce this noise and uncover the statistical features that the images of a single person have in common, so that the person can be uniquely identified.

DeepFace is a deep learning facial recognition system created by Facebook. It identifies human faces in digital images and employs a nine-layer neural net with over 120 million connection weights, and was trained on four million images uploaded by more than 4000 Facebook users5. This method reached an accuracy of 97.35%, almost approaching human-level performance.

A computer recognizes faces as collections of lighter and darker pixels. The system first clusters the pixels of a face into elements such as edges that define contours. Subsequent layers of processing combine these elements into nonintuitive, statistical features that faces have in common yet that differ enough to discriminate between them. The output of the processing layer below serves as the input to the layer above. The output of deep training the system is a representational model of a human face. The accuracy of the result depends on the amount of data, which in this case is the number of faces the system is trained on.
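Once a trained network maps each face image to such a representation (an “embedding” vector), deciding whether two photos show the same person reduces to comparing vectors. The embeddings and threshold below are made-up stand-ins, not DeepFace outputs.

```python
# Sketch of the matching step: compare face embeddings by cosine similarity.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb_photo_1 = np.array([0.12, 0.87, 0.33, 0.51])   # hypothetical embedding
emb_photo_2 = np.array([0.10, 0.90, 0.30, 0.48])   # same person, new photo
emb_photo_3 = np.array([0.85, 0.05, 0.60, 0.11])   # a different person

THRESHOLD = 0.95   # assumed decision threshold
print("1 vs 2 same person?", cosine_similarity(emb_photo_1, emb_photo_2) > THRESHOLD)
print("1 vs 3 same person?", cosine_similarity(emb_photo_1, emb_photo_3) > THRESHOLD)
```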

FBI’s Next Generation Identification (NGI)

The FBI’s Criminal Justice Information Services (CJIS) Division developed and incrementally integrated a new system called Next Generation Identification (NGI) to replace the Integrated Automated Fingerprint Identification System (IAFIS). NGI provides the criminal justice community with the world’s largest and most efficient electronic repository of biometric and criminal history information6. The accuracy of identification using NGI is much lower than that of Facebook’s DeepFace. One reason is the poor quality of the pictures the FBI works with: it normally uses images obtained from public cameras, which rarely capture a face straight on. Facebook, by contrast, already knows who our friends are and works with over 250 billion photos and over 4.4 million labelled faces, compared to the FBI’s over 50 billion photos. With more data, Facebook has an edge in identification. Facebook also has more freedom to make mistakes, since a false photo-tag carries much less weight than a mistaken police ID7. Facial recognition is of great use in automatic photo-tagging, but the false-accept rate is a real risk when trying to identify a suspect, and an innocent person could be in trouble because of it.

Search and e-commerce

Google’s Cloud Vision API and Microsoft’s Project Oxford computer vision, face and emotion APIs provide image-recognition solutions using deep machine-learning algorithms, powering ecommerce and retail applications that enhance users’ shopping experience and create new marketing opportunities for retailers8.

Cortexica9 uses its findSimilar™ software to provide services to retailers such as Macy’s and Zalando. Cortexica does this by providing the retailer with an API. First, the images of all the items in the inventory are ingested into the software; the size and completeness of this dataset is important. Second, a Key Point File (KPF), a proprietary Cortexica format, is produced for each image; this file contains all the visual information needed to describe the image and support future searches. Third, the system is connected to the customer’s app or website search feature. Fourth, when the consumer sends a query image, it is converted into a KPF, the visual match is computed, and the consumer gets the matched results, ordered by visual similarity, in a couple of seconds.
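Cortexica’s KPF format and matching are proprietary, so the following is only a generic illustration of the “query image to most visually similar items” step: each catalogue image is reduced to a small, invented feature vector and matched by nearest neighbour.

```python
# Generic visual-similarity search: match a query feature vector against a
# catalogue of item feature vectors. The vectors here are invented stand-ins
# for features that would really be extracted from the images themselves.
import numpy as np
from sklearn.neighbors import NearestNeighbors

catalogue = {
    "red_dress_01":  [0.90, 0.10, 0.30, 0.70],
    "blue_shirt_07": [0.10, 0.80, 0.50, 0.20],
    "red_skirt_03":  [0.85, 0.15, 0.35, 0.60],
}
item_ids = list(catalogue)
index = NearestNeighbors(n_neighbors=2).fit(np.array(list(catalogue.values())))

query_vector = np.array([[0.88, 0.12, 0.32, 0.65]])   # features of the shopper's photo
_, neighbour_rows = index.kneighbors(query_vector)
print("closest matches:", [item_ids[i] for i in neighbour_rows[0]])
```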

The hot topic of “visual search” is driven by the alignment of consumer behaviour, namely our propensity to take pictures, with retailers’ desire to have their inventory discovered by consumers on their mobile devices. Factors like colour, texture, distinctive parts and shapes all need to be considered in designing an algorithm that meets the challenges of the broad range of retail fashion requirements.

Companies like Zugara11 build augmented reality (AR) shopping applications that let a customer try on clothing in a virtual dressing room by overlaying an image of a dress or shirt, helping them find what suits them best. The app watches the shopper via web camera, can capture the consumer’s emotions and can send them to a Google or Microsoft API for emotion analysis. Depending on the feedback from the API’s image analysis, the AR application can then be guided to offer the customer similar or different outfits12.

According to MarketsandMarkets, a global market research company and consulting firm, the image recognition market is estimated to grow from USD 15.95 Billion in 2016 to USD 38.92 Billion by 2021, at a CAGR of 19.5% between 2016 and 2021. The future of image recognition seems very interesting.


BLOCKCHAIN: FIFTH WAVE...

The second part of this blog series lays out the disruptive potential of blockchain for financial services and how intermediaries end up with diminished roles. Proponents of the technology say that the scale of business transformation will be similar to that of the internet: the birth of the internet enabled the exchange of data, whereas worldwide adoption of blockchain will enable the exchange of value, e.g. trade, commerce and commodities.

The Fifth Wave

In recent history there have been four major technological game-changers for banks. The first disruptor was the mainframe and the personal computer, which drove the first transformation of the banking industry: increased data storage and on-premise processing capabilities boosted efficiencies across the board. The second wave was the internet, which made data accessible at the click of a button, allowed data to flow freely around the world and left banks flush with useful, real-time information. The third wave was cloud computing, which allowed banks to offload complex and cumbersome data processing and exploit computing power efficiently on distributed platforms. Then came the fourth wave, smartphones and their apps, which banks are still reeling from: it lowered barriers to entry and created nimble, technology-powered banking startups known as fintechs. The fifth wave now disrupting the banking landscape is blockchain technology.

Financial Uses

From a banking standpoint, the biggest advantage blockchain provides over a traditional centralised system is disintermediation.

Payments/Remittance & Currency Exchange

In a traditional payments system, inter-bank payments are performed through a central counterparty, and every bank keeps a local database; these databases are not accurately reconciled with one another. Payments are settled as net obligations across accounts recorded by the central counterparty. In addition, cross-border payments involve multiple central counterparties (for different payment networks) and correspondent banks, and each bank has to maintain a reserve account with multiple payment networks.

Adoption of blockchain creates an instantaneous RTGS, since execution happens in real time in a peer-to-peer fashion, reducing counterparty risk, saving transaction costs and lowering settlement time from the usual 2-3 days to seconds. Striking central counterparties and correspondent banks out of the chain also lowers the capital requirements associated with such intermediaries, freeing up financial resources for the banking business. It improves transparency and does away with reconciliation of separate databases, since a single authoritative ledger state is obtained by consensus; compliance also becomes easier, as access can be granted to regulators and auditors. Examples: Ripple – cross-border payment systems, Abra – P2P money transfer, Bitspark – end-to-end blockchain-powered remittance services, Align – payment service provider (PSP), Hellobit – bitcoin-based remittance service for emerging markets, BitPesa – B2B payments

KYC/Identity Management

Know Your Customer (KYC) is a regulatory requirement that obliges banks and financial institutions to verify a client’s identity, which is time consuming, requires lots of paperwork and results in significant overhead costs. Moreover, KYC involves storing and verifying documents pertaining to the client. The creation of the SWIFT interbank registry was an attempt to centralise access to reliable data about customers’ identities.

Blockchain can significantly speed up the KYC process and save costs by giving every customer a single cryptographic identity, with customer data, e.g. identity cards, passports and driving licences, securely and digitally loaded onto the blockchain and available to all the banks. Documents, once validated and authenticated, need no further diligence, saving repeated verifications. Digital documents can also be assigned a blockchain-recorded fingerprint, which a bank can accept as proof of a verified document to complete the KYC process; a minimal illustration of that fingerprint idea follows. Examples: Tradle – KYC portability, Vogogo, Civic, Credits
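This is only the hashing step, not any vendor's product: a verified document is fingerprinted once, and any bank that later receives the same file can re-hash it and compare against the recorded value.

```python
# Sketch of the document-fingerprint idea: hash a verified KYC document once,
# record the hash, and let any later holder of the file confirm it matches.
import hashlib

def fingerprint(document_bytes: bytes) -> str:
    return hashlib.sha256(document_bytes).hexdigest()

passport_scan = b"...binary contents of the verified passport scan..."
recorded_on_ledger = fingerprint(passport_scan)      # stored by the first bank

# A second bank receiving the same file checks it against the recorded hash
assert fingerprint(passport_scan) == recorded_on_ledger
print("document matches the fingerprint recorded on the ledger")
```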

Trade Finance

Financing of domestic and international trade involves two counterparties (the seller and the buyer of goods) and a third party (a bank or financial institution) to reduce two types of counterparty risk: (i) for the buyer, the seller not sending the goods; (ii) for the seller, the buyer not paying. Trade finance is complex and time consuming and involves the following steps: the contract is signed; the buyer’s bank supplies a letter of credit (LoC) to the seller guaranteeing payment; the seller receives a bill of lading when the goods are handed to the carrier; the seller receives payment after giving the bill of lading to the bank; the bank gives the bill of lading to the buyer, who receives the goods after showing it to the carrier.

Blockchains can dramatically automate and speed up this process using smart contracts. Settlement via the LoC process takes days, which can be shortened by embedding the LoC rules within smart contracts. The blockchain holds the buyer’s and seller’s accounts with funds, and interactions between buyer and seller can happen in real time as per the smart contract: once carrier confirmation is received that the goods have been sent, the release of funds to the seller is triggered automatically (a toy sketch of this rule follows). In future, IoT devices with sensors could be integrated with the blockchain to monitor the state of the goods. Examples: Fluent, Skuchain, Tallysticks, Wave, Consentio, Chain of Things, Zerado
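A real implementation would be a smart contract deployed on a blockchain platform; the toy Python state machine below only mirrors the escrow logic described above, with invented amounts.

```python
# Toy state machine mirroring the letter-of-credit flow: funds are locked up
# front and released to the seller only when the carrier confirms shipment.

class TradeFinanceContract:
    def __init__(self, buyer_funds, price):
        self.buyer_funds = buyer_funds
        self.price = price
        self.escrow = 0
        self.state = "CREATED"

    def fund_letter_of_credit(self):
        assert self.buyer_funds >= self.price, "buyer cannot cover the LoC"
        self.buyer_funds -= self.price
        self.escrow = self.price
        self.state = "FUNDED"

    def carrier_confirms_shipment(self):
        assert self.state == "FUNDED", "LoC must be funded before shipment"
        self.state = "SETTLED"
        paid, self.escrow = self.escrow, 0
        return paid                     # amount released to the seller

contract = TradeFinanceContract(buyer_funds=100_000, price=80_000)
contract.fund_letter_of_credit()
print("released to seller:", contract.carrier_confirms_shipment())
```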

Securities Trade Lifecycle

Typically, the securities trading life cycle takes about three days from trade execution to settlement and consists of three major steps:

Trade Execution: the buyer and seller place orders with their respective brokers, who act on the client’s behalf to submit orders on an exchange. Confirmation is sent to the buyer and seller via their brokers when the buy and sell orders are matched in terms of price and volume.

Trade Clearance: buy and sell orders are sent to a central clearing house, which acts as buyer to the seller and as seller to the buyer. By doing so the clearing house guarantees trade execution for both counterparties and removes counterparty risk from the trade.

Trade Settlement: buy and sell obligations are settled using netting, which groups orders into a single net transaction with the custodian as an intermediary. Finally, the buyer settles his obligation with the custodian, receives the securities, and the seller gets paid.

A private blockchain controlled by a consortium of clients, brokers and clearing firms can have clearance and settlement done on the blockchain itself, which also serves as a depository of securities (bonds, shares etc.) stored as digital assets. Buy and sell orders from clients are put on the blockchain by the brokers through smart contracts, which automatically handle trading, matching and execution. Most importantly, the central clearing house loses its relevance as an intermediary. Payments can be managed directly from the client accounts registered on the blockchain. Clearance and settlement rules are encoded within smart contracts, eliminating reconciliation between participants and reducing clearing and settlement time from a T+3 day cycle to minutes. Doing away with the 3-day waiting period makes custodial services irrelevant and minimises the complexity of collateral management. According to Santander, adoption of blockchain can save as much as $20bn a year in overhead costs attached to clearing and settlement. Examples: Digital Asset Holdings with Hyperledger, Overstock with T0, Epiphyte, Clearmatics, SETL

First Movers

Traditional banking reached a milestone in October 2016, when 88 bales of cotton were purchased for $35k in a cross-border, blockchain-based trade. In the past two years, banks have seriously begun exploring blockchain technology for integration into their existing systems, using multiple routes to do so: from in-house labs, to investing in blockchain startups, to participating in platform-based consortia, to aligning with technology companies. Central banks too, for their part, have begun to scratch the surface, from both a regulatory and an applicability standpoint.

Base Software Platforms

Ripple, created by blockchain start-up Ripple Labs, is a payments and currency exchange network with a native currency called XRP (ripples). It offers a cryptographically secure end-to-end payment flow with instant transaction verification, allowing banks to transact directly without intermediaries (correspondent banks, central counterparties).

Ethereum is an open-source distributed computing platform featuring a native cryptocurrency, called Ether, and smart contract functionality, with several functioning applications built on it. It thus makes it possible for developers to build and publish next-generation distributed applications.

Consortia

R3CEV is a consortium, including 70 of the world’s biggest banks and financial institutions, working towards deployment of blockchain technology. The consortium created Corda, an open-source distributed ledger platform geared towards implementation in the banking industry.

Hyperledger Project – led by the Linux Foundation, this consortium of 40+ members includes a mix of financial, technology and blockchain companies. It is focused on blockchain-based protocols and standards, with a modular approach that supports different applications. It has also allowed other players to incubate their projects within it, e.g. Blockstream’s libconsensus and IBM’s Fabric.

Banks

Initially, banks started exploring the viability of cryptocurrencies and created their own: Goldman Sachs’ SETLcoin for instant post-trade settlement, Citibank’s CitiCoin, Bank of New York Mellon’s BKoins and JP Morgan’s bitcoin-alternative “web cash” payments system. UBS has a cryptocurrency lab attempting to build an enterprise-wide “utility settlement coin” with Clearmatics and has claimed 30-35 blockchain use cases. Bank of America is seeking 35 blockchain-related patents, including a cryptocurrency wire transfer and payment system. Banco Santander has an in-house team called Crypto 2.0 that has claimed 20-25 use cases for blockchain. Citibank has invested in a blockchain start-up called Cobalt. Barclays has partnered with the bitcoin exchange Safello to develop blockchain-based services. Goldman Sachs and JP Morgan have invested in Axoni, a blockchain firm that rivals R3, in addition to investing in another blockchain firm, Digital Asset. Dutch banks such as ABN Amro and ING are exploring blockchain, and Rabobank has partnered with Ripple. Australian and New Zealand banks CBA, ANZ Bank and Westpac have partnered with Ripple, while Westpac’s VC arm Reinventure has invested in the startup Coinbase. Indian banks such as ICICI, Axis Bank and Kotak Mahindra have all tested blockchain transactions.

Central Banks

Central banks from Canada to China, England to Europe, and Sweden to Singapore are researching the establishment of a central bank issued cryptocurrency that would allow financial instruments (bonds, equities, land and car registries) to migrate to a sovereign blockchain. The US Federal Reserve has just released its first research on blockchain technology. Deutsche Bundesbank, the German central bank, in partnership with Deutsche Borse, is developing a blockchain prototype for securities settlement built on the Hyperledger platform. Singapore’s central bank, along with 8 local and foreign banks, has been involved in a blockchain-based pilot project with the help of R3CEV. The Bank of Japan and the ECB (an initial sceptic) have partnered on a new joint research project to study potential use cases of blockchain. Banque de France has also begun testing the use of blockchain to establish the identity of creditors within the Single Euro Payments Area. A cashless society is already a reality in most Nordic countries, which is why Nordic central banks are considering going all in with digital currencies.

Technology Firms

Microsoft has allied with Bank of America to create a blockchain-based framework that can be sold to other organisations. Microsoft’s Project Bletchley is also an Azure-based modular blockchain fabric. IBM has launched a cloud-based blockchain service. Alphabet is backing Ripple, alongside other technology firms such as Seagate Technology and Accenture. Accenture has even patented a blockchain that can be edited. Microsoft, Alphabet, Intel, IBM and Amazon are all making a play to bring financial services to their clouds. An ex-Google engineer is working on Vault OS, a blockchain-based operating system for banks.


BLOCKCHAIN: FIFTH WAVE...

This is the first part of a two-part blog series, laying out key concepts surrounding blockchain, the technological breakthrough the world has woken up to, especially in the past few years.

Introduction

Honest recordkeeping of history has always been something of a myth, and records have remained fragile even as they moved from oral tradition to written form with the advent of writing systems. Writing gave us the ability to record and store messages, but it relied on a decaying medium such as paper. Paper-based records were also difficult to authenticate, whether in terms of timestamp or genuineness, and remained subjective, conforming to the famous quote “History is written by the victors”. Late in the last century came computers, which added durability to recorded information. Yet computer records could still be changed at the whim of whoever controlled the database, and they remained vulnerable to fraud.

Mankind may have just chanced upon the holy grail of recordkeeping in blockchain technology: an immutable, cryptographic and decentralised ledger system based on peer-to-peer mechanisms and consensus algorithms. In short, records on a blockchain are set in stone and cannot be changed, not even by a system administrator. Furthermore, each record has been validated through consensus among peers and is therefore less vulnerable to fraud and manipulation.

Brief History

First Generation – Cryptocurrency

In August 2008, before the founder Satoshi Nakamoto came onto the scene, three now-forgotten individuals filed an encryption patent application and registered the site Bitcoin.org. Two months later, in October 2008, Satoshi released a white paper on electronic cash with a vision to solve the problem of counterfeit money. Blockchain was born on 3rd January 2009, when the first block, called the Genesis Block, was created as part of the first cryptocurrency, Bitcoin. Besides the transaction details, the Genesis Block also contained an extra piece of data: a newspaper headline of that day, “The Times 03/Jan/2009 Chancellor on brink of second bailout for banks”. A year after Satoshi’s white paper, in October 2009, Bitcoin received its first valuation against traditional currencies, with $1 equivalent to 1,309 BTC, based on the cost of the electricity needed to generate a bitcoin. The first Bitcoin market was established in February 2010, and the first milestone for Bitcoin was reached in May 2010, when Papa John’s pizza worth $25 was ordered for 10,000 BTC. Bitcoin reached parity with the US dollar for the first time in February 2011, and within four months 1 BTC was worth $31. Recently, BTC homed in close to the $3,000 level (currently 1 BTC = $2,250), a rise of roughly three million times against the USD, completely reversing the $1 = 1,309 BTC situation of seven years earlier.

Second Generation – Smart Contract & Digital Asset

After Bitcoin, the most famous application of blockchain and its first digital asset, the technology can be extended to other business applications in which different parties, e.g. private individuals, corporates, public institutions or even automated devices (IoT), enter into a transactional relationship governed by contracts. Entries in the decentralised ledger consist of computer code, or “smart contracts”, that execute the terms and conditions of the governing contracts between parties. Since a blockchain is also a decentralised asset registry, it can be used to register ownership of other digital assets such as digital documents, digital bonds and digital commodities. This creates huge potential for blockchain well beyond the existing application of cryptocurrencies. Thus a second wave of innovators has come into the fray, looking to tap the business upside of creating blockchain-based services and products that combine smart contracts and digital assets.

Key Concepts

Blockchain works on the philosophy of the absence of a central authority, with recorded transactions grouped into blocks. Every block contains a timestamp and a reference to the previous block, and hence creates a chain of blocks, or blockchain. Every time a block is validated it is broadcast to the wider network and added on top of the blockchain. Since the whole network is based on peer-to-peer communication, every node has a local synchronised copy.

Blocks

Blocks are to a blockchain what pages are to a file. Each block records the following main elements: (i) a timestamp; (ii) the content of the transactions being confirmed in this block, which is broadcast after finalisation; (iii) a reference (or hash fingerprint) to the previous block; (iv) the statement of a new, complex mathematical problem to be solved by the validators of blocks (also called miners). Blocks are thus aligned in a linear sequence over time, with each new block added to the end of the chain.
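A minimal sketch of that structure (ignoring the mining problem and the network entirely) shows how the previous-block hash links blocks together, so that altering any earlier block changes every hash after it.

```python
# Minimal block structure: timestamp, transactions and the previous block's
# hash. Tampering with an earlier block breaks every later link.
import hashlib
import json
import time

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(transactions, previous_hash):
    return {"timestamp": time.time(),
            "transactions": transactions,
            "previous_hash": previous_hash}

genesis = new_block(["genesis"], previous_hash="0" * 64)
block_1 = new_block(["Alice pays Bob 5"], previous_hash=block_hash(genesis))
block_2 = new_block(["Bob pays Carol 2"], previous_hash=block_hash(block_1))

print("chain intact:", block_2["previous_hash"] == block_hash(block_1))
```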

Hash Function

This is a mathematical function that serves as a unique fingerprint for each block. A hash function should have two properties: (i) it should be hard to work back from the hash to the original data, i.e. it should be more or less impossible to decipher or eavesdrop on the block data just by looking at the hash; (ii) when the block data changes, the hash should change unpredictably, so that no two slightly different blocks have the same hash and any tampering is easily identified. Different blockchains use different hash functions: for example, Litecoin adopts scrypt whereas Bitcoin uses SHA-256.
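The second property is easy to see with Python's standard hashlib: changing a single character of the input gives a completely different SHA-256 fingerprint.

```python
# Changing one character of the input produces an entirely different digest.
import hashlib

print(hashlib.sha256(b"pay Bob 10 BTC").hexdigest())
print(hashlib.sha256(b"pay Bob 18 BTC").hexdigest())
```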

Data Distribution (Client-Server versus Peer-to-Peer)

Unlike a client-server model, where the server holds 100% of the data and the client trusts that data to be definitive, blockchain takes a peer-to-peer (P2P) approach. In a P2P network, data is identically replicated on all nodes, which makes each node more independent: it can continue operating even if it loses connectivity to the rest of the network. This also makes the network more robust to malicious attacks or malfunctions and therefore harder to shut down. However, it creates other issues, such as consensus building among peers and large data storage requirements due to replication.

Access

Public blockchain – anyone can access the blockchain, read and write (send transactions for validation), and participate in the consensus process (determining which blocks get added and what the current state should be). It has compelling use cases for industry disruption, disintermediation and social infrastructure. Examples: Bitcoin, Ethereum, Dash, Lisk, Steem.

Private blockchain – a closed and monitored ecosystem in which the ability to write and to participate in the consensus process is extended only to one organisation (fully private blockchain) or a group of entities (consortium blockchain). Read permissions may be public or restricted by certain rules. The intellectual property rights to any solutions developed are kept within the consortium. Rules are established to align with the needs of the organisation or consortium, which makes this model of huge interest to banks and financial institutions. Examples: Ripple Labs, Eris Industries, Chain, Blockstream.

Consensus Mechanism

This refers to the consensus algorithm used to arrive at a single state of the blockchain, such that every node of the network has an identical local copy of the ledger. The consensus mechanism also guarantees transaction security and ledger integrity even if certain nodes become malicious (i.e. do not follow the protocol). Each node performs a portion of the work needed to validate a block, and running a consensus algorithm requires significant computing power and energy. Consequently, it is extremely costly and difficult to manipulate a large enough part of the network (a 51% stake) to control the blockchain. Three major consensus mechanisms are used to resolve blockchains with conflicting states:

1. Proof-of-Work: the most common consensus mechanism, based on nodes competing to find the solution to a computationally hard problem (a toy version is sketched after this list). The network’s energy consumption is a major drawback. Examples: Ethereum, Bitcoin, Hyperledger, Dash, Steem.

2. Proof-of-Stake: creates a disincentive for nodes that do not follow the consensus protocol. Validators are required to put “at stake” a predefined amount of a digital asset, betting on the outcome of the consensus process, so that malicious nodes that do not follow the protocol end up losing those assets. Examples: Tendermint, Lisk.

3. Byzantine-Fault-Tolerant: a consensus method between authenticated validators, applicable to platforms that do not require large throughput yet demand many transactions. It is resilient to a Byzantine attack, which refers to the possibility that a subset of the network’s nodes behaves maliciously. Example: Hyperledger.
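As referenced in item 1, here is a toy proof-of-work loop: keep incrementing a nonce until the block's SHA-256 hash starts with a required number of zeros. Real networks tune this difficulty so that finding a block takes minutes, not milliseconds.

```python
# Toy proof-of-work: search for a nonce whose hash meets the difficulty target.
import hashlib

def mine(block_data: str, difficulty: int = 4):
    target_prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block 81b: Alice pays Bob 5")
print(f"nonce {nonce} gives hash {digest[:16]}...")
```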

Cryptography

Unencrypted data – Public key (decryption code) accessible to every participant in the blockchain creating full and collective transparency which hurts confidentiality but speeds up dispute resolution.

Encrypted data – Accessible only to participants who have the appropriate private key (decryption code) and hence data viewership can be screened.

Hashed data – A “hash” serves as a digital fingerprint representing the veracity of a particular piece of data behind it but inferring hidden data from the hash is computationally impossible. Therefore the digital fingerprint can be shared throughout the blockchain without hurting confidentiality. Example: R3 CEV’s Corda product has adopted the hash approach.

Example - Longest Proof of Work Chain Rule

The Proof-of-Work consensus mechanism used by the Bitcoin blockchain is also called the “longest chain rule”: the blockchain representing the most work (not simply the most blocks) is the one that survives. The “length” of the chain is measured by how much work it took to find each individual block. Consider an example where all nodes of the network are synchronised to one state up to Block 80, and three competing blocks (81a, 81b, 81c), each with slightly different transactions, are created at roughly the same time. This creates three different blockchains: Chain A (blocks 1-80 + 81a), Chain B (1-80 + 81b) and Chain C (1-80 + 81c). Now the race begins for Block 82 to resolve the conflict, as different miners try to mine the 82nd block on each of the three competing chains. If miners building on Chain B are the first to find Block 82, they resolve the conflict: Chain B (1-80 + 81b + 82b) becomes the longest blockchain in terms of work, while blocks 81a and 81c become “orphaned”.
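A tiny sketch of that selection rule, with the amount of work per block reduced to a made-up number, might look like this:

```python
# Pick the surviving chain by total work, not by number of blocks.
chains = {
    "A": [{"block": "81a", "work": 10}],
    "B": [{"block": "81b", "work": 10}, {"block": "82b", "work": 11}],
    "C": [{"block": "81c", "work": 10}],
}

def total_work(chain):
    return sum(block["work"] for block in chain)

winner = max(chains, key=lambda name: total_work(chains[name]))
print(f"chain {winner} survives; the competing blocks are orphaned")
```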


ANALYTICS FOR DIABETES...

Analytics for Diabetes Prevention

Globally, over 415 million people are living with diabetes today, and the International Diabetes Federation estimates that more than 640 million of us may be living with diabetes by 20401. The disease is growing alarmingly around the world, and specifically in India, which ranks among the countries with the largest diabetic populations. One in every two adults with diabetes goes undiagnosed. If the growth in the number of diabetics is not checked, diabetes will inevitably become one of the leading causes of death.

Predictive analytics for diabetes prevention

IBM Watson Health and the American Diabetes Association (ADA) have joined hands to create new digital tools that will ultimately change how diabetes is prevented, identified and managed. They aim to leverage the cognitive computing power of Watson and the association’s repository of diabetes clinical and research data to create digital tools for patients and providers. The ADA has a repository of 66 years of data, which includes aggregated data about self-management, support groups, health activities and diabetes education3. The project includes training Watson to understand diabetes data, identify potential risk factors and create recommendations for health decisions. The goal of the collaboration is to develop solutions that enable the diabetes community to optimize clinical, research and lifestyle decisions, and to address important issues that influence health outcomes, such as social determinants of health4.

US researchers have earlier used the popular statistical modelling method called the “proportional hazards model” to predict an individual’s risk of diabetes. These models predict the time that passes before some event occurs (in this case, the onset of diabetes). In a proportional hazards model, the unique effect of a unit increase in a covariate is multiplicative with respect to the hazard rate.

For example, taking a drug may reduce one’s hazard rate for the onset of diabetes by 50%, while a higher-than-average carbohydrate intake may double it. The model identified a list of 7 factors/variables that are highly predictive of diabetes risk. The subjects were then scored on these factors using the trial data.
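For readers who want to see what fitting such a model looks like in practice, here is a hedged sketch using the lifelines library on synthetic data; the two covariates, the effect sizes and the follow-up window are invented, not the study's actual 7 factors or trial data.

```python
# Hedged sketch of a Cox proportional hazards fit on synthetic diabetes data.
# The exp(coef) column is the multiplicative effect of a one-unit increase
# in each covariate on the hazard of developing diabetes.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "fasting_glucose": rng.normal(100, 15, n),
    "bmi": rng.normal(28, 5, n),
})

# Synthetic ground truth: higher glucose and BMI shorten time to onset
risk = 0.03 * (df["fasting_glucose"] - 100) + 0.05 * (df["bmi"] - 28)
years_to_onset = rng.exponential(8 * np.exp(-risk))
df["developed_diabetes"] = (years_to_onset < 5).astype(int)   # event seen within 5 years
df["years_observed"] = np.minimum(years_to_onset, 5.0)        # censored at 5 years

cph = CoxPHFitter()
cph.fit(df, duration_col="years_observed", event_col="developed_diabetes")
print(cph.summary[["coef", "exp(coef)"]])
```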

The results revealed that people with scores in the top 25% were at highest risk of getting the disease. The trial population was divided into quarters of pre-intervention risk on the basis of model predictions and assessed. Patients at extreme predicted probabilities of developing diabetes should have a more straightforward decision about the benefit of treatment.

The aim was to use data analytics to predict which pre-diabetic patients would gain the most from which treatment approach: treatment with a drug that prevents diabetes, or a lifestyle change such as weight loss or regular exercise.

This approach is known as the “precision medicine” approach in the healthcare sector. Participants were classified into risk pools based on the model’s prediction.

Effect of Lifestyle and drug (Metformin) on hazard risk7


High risk participants:

- Were highly benefitted by the use of the drug, which reduced their risk of diabetes by 21%

- Lifestyle interventions reduced their chance of developing the disease by 28%

Low risk participants:

- No benefit from the drug

- Same intensive lifestyle change brought down their risk by only 5%

All participants:

Exercise and weight loss, with guidance from a health coach, benefited all to some extent, irrespective of their risk scores.

Many patients receive treatments unnecessarily with very low benefit. This issue can also be prevented by such analysis, thus reducing the healthcare cost involved. Customized tailoring of treatment for pre-diabetics and diabetics can improve the lives of all significantly. Doctors can be well-informed to determine the best treatment path for each patient as well as identify potential risk factors for each individual. Most importantly, the accuracy of the model is the key to determine the outcome of any project.


Various other organizations are trying to leverage big data and analytics to manage the disease by proper monitoring and controlling, so that optimal care is provided to the patients at reduced costs.

For example, Glooko, founded in 2010, provides a diabetes management platform that is sold directly to healthcare units and insurance providers8. Patients can use the Glooko mobile app on their smartphones to enter information about their food intake or physical exercise and make appropriate decisions. Healthcare professionals can track and analyse a patient’s real-time progress to provide optimal care.