
Winning With Personalization In Financial Services

The Financial Services industry has gone through a transformation at its very core. While the old days were marked by a few giant corporations dominating the whole industry, today's age of agile startups has seen an influx of small and medium FinTech companies that are lean and efficient, both in their operations and in the customer experience they deliver. Customers too, accustomed to the relative ease of other services that have undergone digital transformation, like calling a taxi or finding a hotel, expect much more. Their demands keep increasing and the cost of failure has become too high. So what is a business supposed to do to keep up? Simple: make things personal.

Technology Has Made Personalization Much Easier

New technologies like the marketing cloud, big data, and intelligent techniques such as artificial intelligence (AI) and deep learning (DL) are allowing companies to do much more than they were previously capable of. They are used to develop and deploy sophisticated algorithms that ensure customers are shown the things they actually want to see. However, the Financial Services industry has been a bit late to the party. A recent Digital Banking Report found that "roughly 40 percent of all but the very largest financial institutions consider themselves 'static,' meaning they offer no personalization within their application". That's not cool. Not at all. It is a well-known fact that a better customer experience (CX) through personalization drives more revenue and improves loyalty, affinity and lifetime value, all of which should appeal to an industry as money-conscious as Financial Services.

Think Across Multiple Channels – Work For The Same Goal

Poll a room full of people on whether they have visited a physical bank location in the last month and you are likely to get only a small show of hands. Yet, a study found that the average customer has 10 digital interactions per month with their main bank. While interactions happen less frequently at physical locations, banks can still have meaningful interactions with customers if the organization can move beyond thinking in channels. Most organizations market to their target audience based on basic principles and rules outlined per channel. However, if the bank's activities are not closely monitored across channels, it loses track of a customer when they move from one channel to the next, say from mobile to PC. Without a top-level view of the customer's journey, it is hard to deliver an experience that feels both personal and seamless across multiple channels.

All in all, what is holding banks back? Two things: siloed information and the absence of a strategic viewpoint that covers the whole journey of the customer. Genuine personalization takes in constant behavioral information from everywhere – web browsing sessions, IoT devices and more. In addition, to see every client's journey, Financial Services institutions need to incorporate historical as well as transactional information locked away in kiosks (like billed amounts, etc.), and also event information, for example missed installments. By joining behavioral, transactional and historical information into a single view of the customer, the bank can boost its conversion by being constantly present on all platforms and delivering a message tailored to both the customer and the platform.
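
To make that single view of the customer concrete, here is a minimal sketch of joining behavioral, transactional and event data with pandas. The tables, columns and values are hypothetical placeholders rather than a prescribed schema.

```python
# Minimal sketch of joining behavioral, transactional and event data
# into a single customer view; all tables and columns are hypothetical.
import pandas as pd

behavioral = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "web_sessions_30d": [14, 3, 22],
    "mobile_sessions_30d": [25, 1, 40],
})
transactional = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "avg_monthly_spend": [820.0, 150.0, 2400.0],
    "last_billed_amount": [120.0, 45.0, 610.0],
})
events = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "missed_installments_12m": [0, 2, 0],
})

# One row per customer, usable by every channel (web, mobile, branch, call centre)
single_view = (behavioral
               .merge(transactional, on="customer_id", how="outer")
               .merge(events, on="customer_id", how="outer"))
print(single_view)
```

In practice such a view would be refreshed continuously and exposed to every channel, so the same tailored message can follow a customer from mobile to PC to branch.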

How To Scale Personalization In Financial Services – The Artificial Intelligence Way

The National Business Research Institute surveyed 100 financial services executives and found that only 32 percent of the group were using AI technologies such as predictive analytics, image processing, recommendation engines, voice recognition and response. Even among these 32 percent, most executives work for global-scale organizations. The main reason lies in a big absence of top-level technocrats in the rest; most of their tech solutions are outsourced to agencies that have no idea how to build software that is smart.

Personalization is essential for any organization of any scale: its direct impact on revenue is too positive to ignore. Traditionally, banks have put their customers in segments: they divide customers into groups and tailor offers and communications accordingly. Modern marketing tools slice these segments progressively smaller until their number explodes and becomes too complex to manage. As segmentation approaches 1:1, a phenomenon called "audience explosion" happens. The phenomenon is easy to understand but tremendously hard to handle, at least manually: it becomes simply too difficult to offer personalized recommendations to millions of customers. Even simple automation won't suffice here, which is why AI is used to serve each customer a platter of the things they love most. Some banks have already started to invest in virtual assistants that interact with their customers via chatbots, which predict and react to changes in customer behavior with AI. The power to serve every customer in this way across every channel gives banks enormous potential to grow their business.

Summing It Up

Artificial Intelligence is a force, much like electricity. How you use it is entirely up to you. But one thing is sure: it WILL make your work both better and easier. Banking is just one of the many industries that have yet to understand the plethora of benefits AI is going to bring. Personalization is a proven method of both increasing revenue and making customers happy, and AI is what is going to ensure that banks do personalization right.
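
As a closing illustration of the 1:1 recommendations discussed above, the sketch below scores product affinity per customer using simple item-to-item similarity. The interaction table, product names and the choice of cosine similarity are illustrative assumptions, not a reference implementation of any bank's engine.

```python
# Toy sketch of per-customer product recommendations via item-item similarity.
# The interactions DataFrame and its columns are hypothetical.
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

# One row per observed customer-product interaction (clicks, holdings, usage)
interactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3, 3, 3],
    "product": ["savings", "credit_card", "savings", "home_loan",
                "credit_card", "home_loan", "insurance"],
    "strength": [1, 1, 1, 1, 1, 1, 1],
})

# Customer x product matrix
matrix = interactions.pivot_table(index="customer_id", columns="product",
                                  values="strength", fill_value=0)

# Item-item cosine similarity
item_sim = pd.DataFrame(cosine_similarity(matrix.T),
                        index=matrix.columns, columns=matrix.columns)

def recommend(customer_id, top_n=3):
    """Score products the customer does not yet hold by similarity to products they do."""
    held = matrix.loc[customer_id]
    scores = item_sim.mul(held, axis=0).sum(axis=0)
    return scores[held == 0].sort_values(ascending=False).head(top_n)

print(recommend(customer_id=1))
```

At real scale the same idea is applied with far richer signals and models, but the principle is unchanged: every customer gets a ranking computed for them, not for a segment.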


4 Use Cases of Predictive Analytics in Oil and Gas Industry

For oil and gas businesses, operating at the highest levels of efficiency while keeping costs under control and increasing productivity is a challenging task. Since oil and gas organisations are asset rich by nature, equipment safety and reliability become significantly important. To limit downtime and minimise risks, oil and gas companies are leveraging industrial data and advanced analytics. This enables predictive maintenance, empowering people to act before equipment failure occurs.

1) Smarter Maintenance

Oil and gas operations require a diverse set of complex and critical assets throughout the upstream, midstream and downstream processes. These assets include offshore pumping stations, compressors, drilling rigs, transportation equipment, pipeline booster stations etc. Monitoring the health and performance of such assets presents a substantial challenge when oil and gas operations are remotely located. Real-time asset health data and performance insights can be used to take informed decisions that drive efficiencies, mitigate risks and improve competitive advantage.

Reactive Maintenance (RM): This is the most basic approach, which involves letting an asset run until failure. It is suitable for non-critical assets that have little to no immediate impact on safety and have minimal repair or replacement costs, so they do not warrant an investment in advanced technology.

Preventative Maintenance (PM): This approach is implemented in the hope that an asset will not reach the point of failure. The preventative maintenance strategy can be formulated on a fixed time schedule, or on operational statistics and manufacturer/industry recommendations of good practice. Preventative maintenance can be managed in an Enterprise Asset Management (EAM) or Computerized Maintenance Management System (CMMS).

Condition-Based Maintenance (CBM): CBM is a proactive approach that focuses on the physical condition of equipment and how it is operating. CBM is ideal when measurable parameters are good indicators of impending problems. CBM follows rule-based logic, where a rule defines a certain condition, and these rules do not change depending on loading, ambient or operational conditions.

Predictive Maintenance (PdM): Predictive maintenance is implemented for more complex and critical assets. It relies on the continuous monitoring of asset performance through sensor data and prediction engines to provide advance warning of equipment problems and failures. PdM typically uses Advanced Pattern Recognition (APR) and requires a predictive analytics solution for real-time insights into equipment health.

Risk-Based Maintenance (RBM): RBM enables comprehensive decision making for plant operations and maintenance personnel using PdM, CBM and PM outcomes. This leads to reliable and safe planning for maintenance and the operation of equipment and assets.

2) Predictive Analytics

Predictive analytics together with PdM can lead to the identification of issues that may not have been found otherwise. According to research by ARC Advisory Group, only 18 percent of asset failures had a pattern that increased with use or age (Rio, 2015). This means that preventive maintenance alone is not sufficient to avoid the other 82 percent of asset failures, and a more advanced approach is required. Predictive analytics software keeps track of the historical operational signature of each asset and compares it to real-time operating data to detect even subtle changes in equipment behavior.
This approach helps in taking corrective actions by identifying changes in system behavior well before traditional operational alarms.

3) Health and Performance Optimization

With predictive asset analytics software solutions, oil and gas organizations get early warning notifications of equipment issues and potential failures, which help them take corrective measures and improve overall performance.

How do predictive asset analytics software solutions work? The software learns an asset's unique operating profile across all loading, ambient and operational conditions through an advanced modeling process. The result of the modeling process is a unique asset signature that is compared to real-time operating data, so the software can detect and alert on subtle deviations from expected equipment behavior before they become problems that significantly impact operations. Such software is able to identify problems days, weeks or months before they occur and provides early warning notifications of developing issues. Together, this helps plant and operations personnel to be proactive and reduce unscheduled downtime. This proactive approach leads to better planning and helps reduce maintenance costs, as parts can be ordered and shipped without rush and equipment can continue running. Other benefits include increased asset utilisation and the ability to identify underperforming assets. Not only do companies improve their profitability by extending equipment life, lengthening maintenance windows and increasing asset availability; further benefits are realised when considering the costs that "could have been", including replacement equipment, lost productivity, additional man hours, etc., when a major failure is avoided. Another increasingly important benefit is the capability for knowledge capture and transfer. Predictive asset analytics solutions ensure that maintenance decisions and processes are repeatable even when organizations face transitioning workforces and the loss of experienced workers with critical institutional knowledge of the operations and maintenance of the organisation's facilities.

4) Smarter Operations

The Internet of Things has the potential to create tremendous business value by enabling smarter equipment integration that generates increasing amounts of data. Oil and gas companies face both challenges and opportunities in leveraging that data to mitigate risk and improve productivity. With the help of predictive analytics, they can ascertain and comprehend actual and expected performance for an asset's current ambient, loading and operating conditions. This information helps enterprises drive efficiencies, mitigate risks and improve productivity.

Source: http://software.schneider-electric.com/pdf/industry-solution/predictive-analytics-for-improved-performance-in-oil-and-gas/
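
The following is a minimal sketch of the general idea behind such early-warning detection, not any vendor's actual product: learn a baseline from historical "healthy" operation of a single sensor, then alert when the smoothed residual between live readings and that baseline drifts beyond a threshold. The data, window size and threshold are all assumed for illustration.

```python
# Illustrative sketch of residual-based early-warning detection for one sensor.
# Thresholds, window sizes and the data are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical historical data from healthy operation (e.g. pump bearing temperature)
healthy = pd.Series(70 + rng.normal(0, 1.5, 5000))

# "Model" of expected behaviour: mean and spread under normal conditions
baseline_mean = healthy.mean()
baseline_std = healthy.std()

# Hypothetical live stream where a slow fault develops after sample 300
live_values = 70 + rng.normal(0, 1.5, 500)
live_values[300:] += np.linspace(0, 8, 200)   # gradual drift, e.g. bearing wear
live = pd.Series(live_values)

# Residual between observed and expected behaviour, smoothed to suppress noise
residual = (live - baseline_mean).rolling(window=30).mean()

# Flag deviations long before a traditional fixed high-temperature alarm would fire
alerts = residual > 3 * baseline_std
first_alert = alerts.idxmax() if alerts.any() else None
print(f"Earliest warning at sample: {first_alert}")
```

A production system would model many sensors jointly under varying loading and ambient conditions, but the principle of comparing live data to a learned signature is the same.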


Leveraging Machine Learning for Micro Finance Collections

A loan passes through various stages or events from the moment it is given till the time it is repaid. The collection strategy of a loan is as important to any financial institution as its lending strategy, and delays in repayments not only impact the financer's books, they also impact the borrower, as they are reflected in the borrower's credit history.

SHG / JLG Collections

An SHG (self help group) is a community based group with 5-20 members. Micro Finance Institutions typically offer group loans and individual loans that have a standardized repayment structure. The repayment cycle could be weekly, monthly or fortnightly depending on the scheme and institution. In a typical collection process, either an MFI agent visits the borrower to collect the repayment in cash or the borrower walks to a physical branch to make the payment.

How Data Science Helps

Predictive analytics plays a key role in surfacing behavioral patterns that determine whether a customer is likely to default. The right collection model can be a driving factor behind a product's collection efficiency. A simple classification model or a scorecard can be trained on past data to help the collection team identify the chunk of customers in the current portfolio who display a similar pattern to the ones who defaulted on the same product in the past. It can help the collection team put more focus on these customers and align their efforts accordingly. This model will run at the start of every collection cycle, and its frequency will match the repayment frequency of the customer (weekly, monthly or fortnightly). The two major categories of variables that can be used to identify this pattern are:

Credit Bureau Data

Looking at a customer's credit data tells us the customer's current market activity and his/her past credit history, both of which contain major variables that can help us identify the customer's potential for credit default.

Customer's Internal Performance Data

You also have the repayment history of the customer with you, which can be broken down into several types of variables.

Machine learning algorithms are fed all of this data, from which they learn and create predictions. These algorithms can extract linear and nonlinear patterns in the data which would be difficult for a human (the collection team) to see. A multivariate machine learning model with hundreds of features can easily outperform a univariate rule-based collection strategy. Applied machine learning can not only give you better results but also clear interpretability and deeper insights for the business to make better decisions. With the help of predictive analytics in collections, MFIs can maintain good clean books and can aim for higher profitability.
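
To make the scorecard idea above concrete, here is a minimal sketch of training a default-prediction classifier and scoring the current portfolio. The `loans` table, its column names and the choice of logistic regression are hypothetical stand-ins for whatever bureau and internal repayment variables an MFI actually has.

```python
# Minimal sketch of a default-prediction model for collections prioritisation.
# The DataFrame `loans` and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# In practice: one row per customer per past repayment cycle, e.g. from a warehouse
loans = pd.DataFrame({
    "bureau_score":        [680, 540, 720, 600, 510, 700, 590, 640],
    "active_loans":        [1, 4, 0, 2, 5, 1, 3, 2],
    "missed_installments": [0, 3, 0, 1, 4, 0, 2, 1],
    "weeks_on_book":       [40, 12, 60, 25, 8, 52, 18, 30],
    "defaulted":           [0, 1, 0, 0, 1, 0, 1, 0],   # label from past cycles
})

features = ["bureau_score", "active_loans", "missed_installments", "weeks_on_book"]
X_train, X_test, y_train, y_test = train_test_split(
    loans[features], loans["defaulted"], test_size=0.25,
    random_state=42, stratify=loans["defaulted"])

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Score the portfolio at the start of each collection cycle;
# the collection team focuses on the highest-risk customers first.
risk = model.predict_proba(loans[features])[:, 1]
print(loans.assign(default_risk=risk).sort_values("default_risk", ascending=False))
```

The same pattern scales to hundreds of bureau and internal features; the output is simply a ranked list of customers for the collection team to act on each cycle.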


Role Of Big Data and Machine Learning In Manufacturing Industry

Given the availability of a huge data pool, the manufacturing industry has started optimising the areas that have the most impact on production activities with a data-driven approach. With access to real-time shop floor data, manufacturing companies have the capability to conduct sophisticated statistical analysis using big data analytics and machine learning algorithms to find new business models, fine-tune product quality, optimise operations, uncover important insights and make smarter business decisions. In manufacturing, machine learning and big data techniques are applied to analyse large data sets to approximate the future behavior of systems, detect anomalies and identify scenarios for all possible situations. In this blog article, we take a look at how big data analytics and machine learning are transforming the manufacturing sector.

Predictive Maintenance

It is well understood that maintenance done at the right time reduces costs. One of the most impactful applications of machine learning in manufacturing has been predictive maintenance. The Industrial Internet of Things (IIoT) market stands at an estimated $11 trillion, and predictive maintenance can help companies save almost USD 630 billion over the course of the next 15 years. Machine learning can provide valuable insights into the health of machines and predict if a machine is going to experience a breakdown. This information can help companies take preventive measures instead of reactive ones and reduce unplanned downtime, excess costs and long-term damage to the machine. Enterprises can leverage machine learning algorithms to analyse sensor data and improve Overall Equipment Effectiveness (OEE) by improving equipment quality and the entire product line, along with boosting shop floor and plant effectiveness.

Preventive Maintenance or Condition Based Monitoring

Given that manufacturing enterprises have a large install base of machines, they need to ensure that machinery does not break down when they need it most. With preventive maintenance or condition based monitoring, they try to keep equipment in optimum working condition and prevent unplanned downtime by detecting equipment failures before they happen and fixing them within the stipulated time. Preventive maintenance is a process of continuous machine monitoring in which, using pre-defined parameters, patterns that indicate equipment failure can be tracked and machine failure predictions can be made in time. Condition monitoring ensures that equipment keeps running, or is maintained, by constantly monitoring variances in these parameters.

Quality Control

In today's regulatory landscape, product quality is of paramount importance. Most manufacturers say product quality defines their success in the eyes of their customers. They constantly seek ways to reduce waste and variability in their production processes to improve efficiency and product quality. Leveraging advanced big data analytics and machine learning concepts, manufacturing companies can capture sensor data from shop floor tools and equipment to take an increasingly granular and enterprise-wide approach to quality control. In addition, manufacturers will also be able to identify defects, uncover the root cause of problems, reduce the risk of shipping non-conforming parts, enable engineering improvements and determine which factors, processes and workflows impact quality.
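
As a rough illustration of sensor-driven quality control, the sketch below trains a classifier on hypothetical shop-floor measurements to flag non-conforming parts and surface which process parameters drive defects. The column names, values and the random forest choice are assumptions for illustration only.

```python
# Minimal sketch of a sensor-based quality classifier; data is hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical shop-floor measurements, one row per produced part
parts = pd.DataFrame({
    "spindle_temp_c": [61, 75, 63, 82, 60, 79, 64, 85, 62, 77],
    "vibration_mm_s": [1.1, 3.8, 1.3, 4.2, 1.0, 3.5, 1.2, 4.6, 1.1, 3.9],
    "cycle_time_s":   [30, 41, 31, 44, 29, 40, 30, 46, 31, 42],
    "non_conforming": [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],   # label from QC inspection
})

features = ["spindle_temp_c", "vibration_mm_s", "cycle_time_s"]
X_train, X_test, y_train, y_test = train_test_split(
    parts[features], parts["non_conforming"], test_size=0.3,
    random_state=0, stratify=parts["non_conforming"])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Which process parameters correlate with defects? A starting point for root-cause analysis.
print(dict(zip(features, model.feature_importances_.round(2))))
print("held-out accuracy:", model.score(X_test, y_test))
```

With real production volumes the same workflow runs on far richer data, but the output is the same: a defect-risk score per part and a ranking of the process variables that matter most.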
Effective Supply Chain Management

McKinsey predicts machine learning will reduce supply chain forecasting errors by 50% and reduce lost sales by 65% through better product availability. Supply chains are the lifeblood of any manufacturing business. Big data analytics and machine learning algorithms can help manufacturing companies assess the state of the supply chain and drive efficiencies with inventory optimization, demand planning, supply planning, operations planning, logistics etc. This allows manufacturers and suppliers to partner in a real-time environment to avoid keeping safety stock levels high, adjust inventory positions so that the right inventory is positioned at the right location to serve customers better and prevent stock-outs, and also improve transportation logistics.

Optimization of Operations

A Gartner survey on the projected use of manufacturing analytics over the next two years showed that 88% of companies plan on utilising data metrics to improve manufacturing responsiveness, 81% to improve capacity utilisation, 74% to understand their true costs, and 75% to make faster and better decisions. By making real-time adjustments, manufacturing companies can optimise the operational efficiency of manufacturing assets. This involves managing production capacity by having a real-time view of equipment performance and production processes, along with identifying asset locations, including those of products and people. By leveraging machine learning and advanced analytics, enterprises can assess demand forecasts and other parameters such as future raw material costs, the cost of manufacturing and distribution, working capital analysis etc. This directly helps in improving the quality of Sales and Operations Planning through supply chain network optimisation.

Improvement of After Sales Service

Manufacturers are coming to understand that their actions after making a sale are as important as the efforts they put into preparing for the sale, and that both have an increasingly significant impact on their company's financial performance. A recent study found that 27% of manufacturing companies' total revenue came from service. Another report suggested that an average gross margin of 39% could be attributed to after-sales service. Undeniably, high-quality service is important to obtaining financial success. Therefore, manufacturers will move away from outdated technologies and business practices for inventory and after-sales management that provide little visibility and control. They will need to utilise predictive analytics to optimise their after-sales service and product parts business performance to improve customer loyalty, save time and reduce costs.

Customisation of Products

The power shift from manufacturers to consumers is also driving investment in product customisation capabilities that are largely made possible by advancements in big data usage, machine learning and advanced analytics. When manufacturers provide tailored products, consumers provide extensive data about their preferences and behaviours that manufacturers can use to inform future product development. Big data analytics then allows companies to analyse customer behavior and develop methods of delivering products in the most timely and efficient way possible. We will thus see manufacturers moving data out of silos and creating a data ocean of customer information with the goal of becoming more agile and responsive in making products to individual requirements in both B2C and B2B environments. This contrasts with the traditional focus on mass production of standardized products.


11 Questions Business Leaders Need to Ask Before Preparing AI Strategy

Undoubtedly, Artificial Intelligence (AI) can offer organizations a substantial competitive advantage if used in the right place and in the right circumstances. There is also considerable pressure on organizations to go the AI route for fear of losing their edge to competitors. This pressure is easily felt by the business leaders who need to craft and implement an enterprise AI strategy. In a recently conducted survey (Oct'17 to Nov'17), a majority of business leaders indicated machine learning (ML) and AI as their companies' most significant data initiative for the next year, and 88 percent of respondents indicated that their company already has, or plans to, implement AI and ML technologies within their organization. But it is still not clear whether AI will bring productivity benefits, or whether it will have any impact on an organization's revenues. So, what should organizations be thinking about? What questions should company leaders be asking before they push forward? Here are 11 key questions they need to answer.

1. Do you really need AI to solve this problem?

Some automation and analytics use cases are simple enough that they can be solved with much simpler procedural code rather than building and maintaining an AI model. Enterprises need to figure out what they are trying to do and decide if AI is worth the investment.

2. How will AI improve your Customer Engagement?

Businesses should leverage AI to deliver the right message at the right time to the right customer to significantly improve customer engagement. By identifying low-hanging-fruit, high-impact opportunities they can transform their brand for optimal customer engagement and make an immediate and tangible difference in customer relevance. If cost reduction is an important driver, think about AI-powered chatbots to reduce mundane customer service tasks.

3. What is the organisation's business case?

If AI is deployed simply as an experiment without identifying and solving a specific business problem, it turns into a short-lived initiative with no business value proposition: leadership will not see any return on investment, people will simply stop using it, and the entire technology will be dismissed as "not working for us."

4. Do you have the necessary data?

This is a significant factor to take care of. Using AI involves being able to train a model on data, so companies planning to use AI in the coming years really need to start thinking about data collection now, without which AI will not be as effective. Enterprises need to understand that AI is only as good as your data and goals allow it to be. Without framing robust key performance indicators (KPIs) and performance targets, you will find yourself lost in a corpus of data and will not understand the right way to optimize your actions to achieve the desired results.

5. Do You Trust The Data Sources AI Will Use?

One of the key questions organizations need to answer is whether their data and data sources are suitable for AI. They should view data as a strategic business advantage and pay attention to the data they are collecting, how they are storing it and how they can use it to create a personalised experience for their customers.

6. Is Your Data Architecture Suitable?

While data is important, it is not enough. Organizations need to build a robust data strategy and ensure they have the right and effective data architecture in place.

7. Can Existing Data Management Systems Support AI?
Can the existing data management systems hold up under the new load of artificial intelligence? AI systems use data as fuel, and for an effective AI model this data should not be incomplete, inaccurate or biased. That said, don't wait for the data to be perfect: current AI is perfectly capable of determining what data works and what is too unreliable to use.

8. What Are the Consequences of Getting It Wrong?

Sometimes, AI is all about statistics and finding the right correlation. In such cases, similar to humans, AI might produce wrong results depending on the data quality. Business leaders need to think about whether they want to implement AI in a process with a lot of variability, which may have a lower accuracy rate and could have major consequences when it gets things wrong.

9. What Are the Risks?

AI comes in two distinct flavors, transparent and opaque, and the two have very different uses, applications and impacts for businesses and users in general. In some instances, businesses will need to employ a transparent form of AI that can explain its logic and exactly how it reaches certain algorithmic decisions.

10. How Will This Impact Workers in Your Organization?

AI's rapid business adoption is expected to replace part of the function of an employee's role, which might create a negative perception of, and resistance to, this change. Sustained AI adoption will require business leaders to involve employees and their line managers right from the start. Effective learning programs will help them take this change in a positive manner.

11. Will AI Integrate With Your Current Stack?

An AI solution should be integrated as part of a broader process, not as a standalone technology solution. AI, process and people should work together to make the business ecosystem more efficient and enhance productivity, results and revenue.


5 AI Myths: Debunked

AI has received a lot of hype in the marketing community, and for good reason. As research and advisory firm Forrester Research notes in its report, "AI Must Learn the Basics Before It Can Transform Marketing," AI-powered marketing applications promise numerous benefits, including efficiency and speed, smarter decision making, and optimized customer journeys and campaign performance. But this hype, or in some cases overhype, has caused some confusion within the marketing industry. Here are five Artificial Intelligence myths outlined in the report, and the truths behind them.

Myth 1: AI Is New

AI has actually been around for decades. John McCarthy, who has been credited with coining the term, wrote a proposal on the subject back in the mid 1950s. The concept isn't new to marketers, either. Joe Stanhope, Forrester VP and principal analyst, says companies like Rocket Fuel and MediaMath have leveraged AI for a while to optimize the purchasing of display ads. The media has been following the rise of AI, too. And as AI was brought into the limelight (winning Jeopardy matches, mastering the board game Go, serving as a main character in movies), people's ideas of AI, and its originality, were altered. "People have in the back of their heads these preconceived notions of AI because they've grown up with it kind of in the background," says Stanhope, who also authored the report.

Myth 2: It's About Complex Math and Algorithms

While it can be easy for those without a PhD in mathematics to feel intimidated by AI, and it is highly complex "under the hood," Stanhope says the technology is more about the ingested data. "We tend to think about heavy-duty math and algorithms," he says, "but, in fact, AI is really a data play." Indeed, the report says marketers need to provide their AI systems with "accurate, updated, and complete data" for the technology to detect connections. They also need to establish a feedback loop, the report notes, to drive optimized results.

Myth 3: AI Systems Work Instantaneously

Stanhope compares AI systems to human babies: both don't know much in the beginning and need proper training to flourish. Just as babies need time to learn how to walk and talk, AI systems need time to ingest companies' data and key performance metrics and understand business problems.

Myth 4: AI Will Put Marketers Out Of Work

While Forrester did predict that cognitive technologies like AI would replace 7% of U.S. jobs by 2025, Stanhope says he doesn't expect marketers to completely turn over their jobs to computers anytime soon. "We do not see AI as a system that's going to put marketers out of work," he says. On the contrary, Stanhope says companies will still need marketers to input the necessary data and monitor the technology to ensure that it's meeting KPIs and avoiding unintended consequences. He also expects companies to put a premium on creativity and content creation so that the machines will have enough variants to test. AI will simply ingest, analyze, and act on the data at velocities and speeds with which humans cannot compete, notes the report. In other words, it's not a set-it-and-forget-it kind of technology. "Humans are still very much involved in this," Stanhope says. "It's a human-computer relationship. It's very symbiotic."

Myth 5: AI Will Help Marketers Uncover Rich Insights About Their Customers

AI systems aren't customer insight solutions, Stanhope says. That's because they're powered by the customer data companies already have.
Indeed, Stanhope says AI systems are designed to optimize outcomes, not simply tell marketers what their customers like and dislike. "It's not an AI system's job to teach marketers about their customers," he says. The report also notes that AI systems process so much data at such high speeds that marketers cannot expect a "play-by-play" of what they learn.


Predictive Analytics: 4 Assumptions Business Leaders Have

Business leaders and stakeholders often wonder about the right time to start looking at analytics, and sometimes shy away due to concerns about data availability, data quality, lack of resources and the value of the overall exercise. We have been asked quite a few questions ourselves in the last couple of months by decision makers across the insurance industry. The frequent ones are quoted below, along with our responses.

Assumption 1: We just have a few thousand records; I am not sure if this is enough for any kind of predictive analytics.

That's a valid observation: for any predictive model to be successful, we need to build and validate it on a sufficient dataset. Generally you can have a fairly good model with 1,000 records and at least 100 events, for example 100 lapses among 1,000 observed customers. As a rule of thumb, in addition to the above, for each variable used for prediction there should be at least 20 records. For example, if 10 variables are used for prediction, the minimum number of records expected is 10 x 20, i.e. 200. This whole process can also help you identify deficiencies in the data collection process, like missing values, invalid data, or an additional variable that should have been collected. Such interventions at an early stage can be very helpful and go a long way in improving data quality.

Assumption 2: Our data quality is too bad; I don't think we can do it right now.

Addressing data quality is core to the process of modeling. Data, once imported, is processed to bring it into a meaningful shape before any further analytics. The availability of high computing power at lower cost means any size of data is small nowadays and can be processed in less time and at lower cost.

Assumption 3: I am not too sure about the Return on Analytics.

The real fruit of analytics is not just in the scorecards or numbers but also in the way it is integrated and implemented within the organization. Having a list of customers in Excel scored on the basis of lapsation might not be very useful, but if the scoring is real time and integrated across the IT ecosystem of web and mobile, giving your agents and customer service team insights into consumer behavior every time a customer interacts with your firm, it becomes much more actionable. Think about product affinity ratings for a customer integrated with the tablet app agents carry these days. Not only will your agent be able to push the right product to the customer based on their needs but, more importantly, build a long-term relationship.

Assumption 4: I already have basic predictive modeling initiatives running but they are not very effective. What more can I do?

The basic premise of any analytics initiative is framing the right question, having the right data at hand and finally a strong actionable strategy. Doing this right will definitely produce good results. Once you have considered internal data sources, you can also try adding external data sources like CIBIL, social media and economic indicators like inflation, exchange rates etc. to glean information about financial behavior, consumer lifestyle and events. Frame hypotheses that you would want to validate against external data sources and test them.
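
As a quick illustration of the sample-size rules of thumb from Assumption 1, here is a tiny, purely illustrative check; the default thresholds simply encode the numbers quoted above and can be adjusted to your own standards.

```python
# Tiny illustrative check of the sample-size rules of thumb from Assumption 1.
def enough_data(n_records, n_events, n_predictors,
                min_records=1000, min_events=100, records_per_predictor=20):
    return {
        "enough records": n_records >= min_records,
        "enough events": n_events >= min_events,
        "enough records per predictor": n_records >= records_per_predictor * n_predictors,
    }

# Example: 1,200 policies with 110 observed lapses and 10 candidate predictors
print(enough_data(n_records=1200, n_events=110, n_predictors=10))
```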


Customer Segmentation: Data Science Perspective

Organizations around the world strive to achieve profitability in their business. To become more profitable, it is essential to satisfy the needs of customers. But when variations exist between individual customers, how can they do that effectively? The answer is by recognizing these differences and dividing the customers into different segments. But how do organizations segment their customers? In this article we'll help you understand this from a data science perspective.

What is customer segmentation?

Customer segmentation is the process of dividing the customer base into different segments, where each segment represents a group of customers who have common characteristics and similar interests. As explained above, the exercise of customer segmentation is done to better understand the needs of the customer and deliver targeted products, services and content. Over time, all sorts of organizations, from e-commerce to pharmaceuticals to digital marketing, have recognized the importance of customer segmentation and are using it to improve customer profitability. Customer segmentation can be carried out on the basis of various traits, from demographics to behavioral and psychographic attributes.

How to perform customer segmentation?

Start with identifying the problem statement

One of the foremost steps is to identify the need for the segmentation exercise. The problem statement and the output expectation will guide the process of segmentation. In different cases, the intent or need to perform customer segmentation is different, and this determines the approach taken to achieve the desired outcome.

Gathering data

The next step is to have the right data for the analysis. Data can come from different sources: the internal database of the company, or surveys and other campaigns. Third-party platforms like Google, Facebook and Instagram have advanced analytics capabilities that allow capture of behavioral and psychographic data about customers.

Creating the customer segments

Once you have defined the problem statement and gathered all the required data, the next step is to carry out the segmentation exercise. Data science and statistical analysis, with the help of machine learning tools, help organizations deal with large customer databases and apply segmentation techniques. Clustering, a data science method, is a good fit for customer segmentation in most cases. The choice of clustering algorithm depends on which type of clustering you want; many algorithms use similarity or distance measures between data points in the feature space in an effort to discover dense regions of observations. Some of the widely used machine learning clustering algorithms are k-means, hierarchical clustering and DBSCAN.

Segmentation backed by data science helps organisations forge a deeper relationship with their customers. It helps them take informed retention decisions, build new features, and strategically position their product in the market.
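
To make the clustering step concrete, here is a minimal k-means sketch on a hypothetical customer table; the features, the values and the choice of three clusters are illustrative assumptions rather than a recommended configuration.

```python
# Minimal k-means segmentation sketch; features and values are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

customers = pd.DataFrame({
    "annual_spend":     [1200, 300, 5000, 450, 4800, 900, 150, 5200],
    "orders_per_year":  [10, 2, 35, 4, 30, 8, 1, 40],
    "avg_basket_value": [120, 150, 143, 113, 160, 112, 150, 130],
})

# Scale features so no single trait dominates the distance measure
scaled = StandardScaler().fit_transform(customers)

# Choose k via business context or the elbow/silhouette method; 3 is illustrative
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
customers["segment"] = kmeans.fit_predict(scaled)

# Profile each segment so the business can give it a meaningful label
print(customers.groupby("segment").mean().round(1))
```

The profiling step at the end matters as much as the algorithm: a segment only becomes useful once the business can describe it (for example, "frequent, high-value buyers") and act on it.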


Analytics: No Pain, No Gain

“Analytics is a journey and not a destination! It takes considerable effort to frame that journey and execute it with a sense of purpose. You will encounter stumbling blocks that may threaten your initiative, but you need to find a way out and keep marching ahead.”

What is it like to build a data analytics strategy? We recently completed a data analytics exercise in the education domain for a US client, and it had all the flavors of roadblocks one can encounter when venturing into analytics territory. I intend to summarize them here, along with the solutions we found in collaboration with all stakeholders.

Takeaways

This was just a month's exercise. Surely we will hit many more such scenarios ahead.


5 Data Quality Challenges in Data Science

In this era when Data Science and AI are evolving quickly, and critical business decisions are being taken and strategies built on the output of such algorithms, ensuring their efficacy becomes extremely important. When the majority of the time on any data science project is spent on data preprocessing, it becomes extremely important to have clean data to work with. As the old saying goes, "garbage in, garbage out": the outcome of these models is highly dependent on the nature of the data fed in, which is why data quality challenges in data science are becoming increasingly important.

Challenges to Data Quality in Data Science

Let's understand this problem better using a case. Say you are working for an Indian bank that wants to build a customer acquisition model for one of its products using ML. As with typical ML models, it needs lots and lots of data, and as the size of the data increases, your problems with the data also increase. While doing data prep for your model you might face several quality challenges. Let's look at a few of them one by one. The most common causes of data quality issues are:

Duplicate Data: Suppose you are creating customer demographic variables for your model and you notice a cluster of customers in your dataset who have exactly the same age, gender and pincode. This is quite possible, as there can be a bunch of people of the same age and gender living in the same pincode, but you need to take a closer look at the customer details table and check whether the rest of the details (like mobile number, education, income, etc.) of these customers are also the same. If they all are, it is probably due to data duplication. Multiple copies of the same records not only take a toll on computing and storage but also affect the outcome of machine learning models by creating a bias.

Inaccurate Data: Suppose you are working with location-specific data. It is quite possible that the pincode column you fetched contains some values which are not 6 digits long. This problem occurs due to inaccurate data, and it can impact your model wherever data needs to be aggregated at pincode level. Features with a high proportion of incorrect data should be dropped from your dataset altogether.

Missing Data: There can be data points which are not available for your entire customer base. Suppose your bank started to capture the salary of customers only in the last one year; customers who have been associated with the bank for more than a year will not have their salary details captured. However important you might think this variable is for your model, if it is not available for more than 50% of your dataset, it cannot be used in its current form.

Outliers: Machine learning algorithms are sensitive to the range and distribution of attribute values. Data outliers can spoil and mislead the training process, resulting in longer training times, less accurate models and ultimately poorer results. Correct outlier treatment can be the difference between an accurate and an average performing model.

Bias in Data: Bias error occurs when your training dataset does not reflect the realities of the environment in which a model will run. Let's understand this in our case: typically in acquisition models, the potential customers on which your model will run and predict in future can be of two types, credit experienced or new to credit.
If your training data contains only credit-experienced customers, your data will be biased and the model will fail miserably in production, as all the features which capture customer performance using credit history (bureau data) will not be present for new-to-credit customers. Your model might perform very well on experienced customers but will fail for the new ones. ML models are only as good as the data they are trained on; if the training data has systematic bias, your model will also produce biased results.

How To Address

Now that we understand the data quality challenges, let's see how we can tackle them and improve our data quality. But before going further, let's accept that data will never be 100% perfect. There will always be inconsistencies through human error, machine error or sheer complexity due to the growing volume of data. While developing ML models there are a few techniques we can use to address these issues, such as de-duplication, imputation of missing values and outlier treatment. Apart from these techniques, we can also add logical, rule-based checks, defined with the help of domain experts, to validate that the data reflects real values. There also exist many software solutions in the market to manage and improve data quality in data science and help you create better machine learning solutions.

Final Words

Dirty data is the single greatest threat to success with analytics and machine learning, and can be the result of duplicate data, human error and nonstandard formats, to name just a few factors. The quality demands of machine learning are steep, and bad data can backfire twice: first when training predictive models and second in the new data used by that model to inform future decisions. When 70% to 80% of a data scientist's time on any ML project is spent in the data preparation phase, ensuring that high-quality data is fed into ML algorithms should be of the highest importance. As more and more data is generated and captured with each passing day, addressing this challenge is more important than ever.
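
As a small illustration of how some of the challenges above can be detected in practice, the sketch below runs basic pandas checks for duplicates, invalid pincodes, missing values and outliers on a hypothetical customer table; the columns, rules and thresholds are assumptions for illustration.

```python
# Illustrative data-quality checks echoing the challenges above; columns are hypothetical.
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [101, 102, 103, 103, 104],
    "age":         [34, 51, 29, 29, 43],
    "pincode":     ["110001", "4000", "560034", "560034", "700019"],
    "salary":      [55000, None, 62000, 62000, 5500000],
})

# 1. Duplicate data: rows identical across all details
duplicates = customers.duplicated(keep="first")

# 2. Inaccurate data: Indian pincodes should be exactly 6 digits
bad_pincode = ~customers["pincode"].str.fullmatch(r"\d{6}")

# 3. Missing data: share of missing values per column
missing_share = customers.isna().mean()

# 4. Outliers: flag values outside 1.5x the interquartile range
q1, q3 = customers["salary"].quantile([0.25, 0.75])
iqr = q3 - q1
salary_outlier = (customers["salary"] < q1 - 1.5 * iqr) | (customers["salary"] > q3 + 1.5 * iqr)

print(f"duplicate rows: {duplicates.sum()}, invalid pincodes: {bad_pincode.sum()}")
print(missing_share)
print(f"salary outliers: {salary_outlier.sum()}")
```

Checks like these are cheap to run at the start of every model refresh, and catching such issues before training is far less costly than debugging a biased or underperforming model afterwards.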
