Predictive analytics in the context of "Analytics"


⭐ Core Definition: Predictive analytics

Predictive analytics encompasses a variety of statistical techniques from data mining, predictive modeling, and machine learning that analyze current and historical facts to make predictions about future or otherwise unknown events.

In business, predictive models exploit patterns found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of risk or potential associated with a particular set of conditions, guiding decision-making for candidate transactions.
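
To make this concrete, here is a minimal Python sketch of that workflow, assuming synthetic data and scikit-learn: a model is fitted to historical transaction outcomes and then used to score new candidate transactions for risk. The feature names, thresholds, and data are invented for illustration, not taken from any particular product.

```python
# A minimal, illustrative sketch: training a risk model on historical
# transactions, then scoring new candidate transactions. The data is
# synthetic and the feature names are assumptions for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Historical facts: [transaction_amount, account_age_days] per record.
amount = rng.gamma(shape=2.0, scale=60.0, size=1000)
age = rng.uniform(1, 2000, size=1000)
X_hist = np.column_stack([amount, age])

# Historical outcomes, synthesized so that large amounts on young
# accounts are more likely to have been risky (1 = risky, 0 = fine).
y_hist = (amount / 100 - age / 1000 + rng.normal(0, 0.5, 1000) > 0.2).astype(int)

# Capture the relationship between the factors and the outcome.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_hist, y_hist)

# Assess the risk of new candidate transactions to guide the decision.
candidates = np.array([[40.0, 1500.0], [480.0, 30.0]])
for (amt, days), p in zip(candidates, model.predict_proba(candidates)[:, 1]):
    print(f"amount={amt:.0f}, account_age={days:.0f} days -> risk={p:.2f}")
```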


👉 Predictive analytics in the context of Analytics

Analytics is the systematic computational analysis of data or statistics. It is used for the discovery, interpretation, and communication of meaningful patterns in data, and it falls under the umbrella term data science. Analytics also entails applying data patterns toward effective decision-making. It is especially valuable in areas rich with recorded information, and it relies on the simultaneous application of statistics, computer programming, and operations research to quantify performance.

Organizations may apply analytics to business data to describe, predict, and improve business performance. Specifically, areas within analytics include descriptive analytics, diagnostic analytics, predictive analytics, prescriptive analytics, and cognitive analytics. Analytics may apply to a variety of fields such as marketing, management, finance, online systems, information security, and software services. Since analytics can require extensive computation (see big data), the algorithms and software used for analytics harness the most current methods in computer science, statistics, and mathematics. According to International Data Corporation, global spending on big data and business analytics (BDA) solutions is estimated to reach $215.7 billion in 2021. According to Gartner, the analytics platform software market reached $25.5 billion in 2020.
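
To illustrate the distinction between two of those categories, the following short Python sketch uses made-up monthly revenue figures: descriptive analytics summarizes what happened, while predictive analytics extrapolates a fitted trend to what is likely to happen next. The data and the simple linear model are assumptions chosen for brevity.

```python
# A toy contrast between descriptive and predictive analytics,
# using made-up monthly revenue figures (in $ millions).
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

sales = pd.DataFrame({
    "month": np.arange(1, 13),
    "revenue": [10.2, 11.1, 10.8, 12.0, 12.5, 13.1,
                13.0, 13.8, 14.2, 14.9, 15.3, 15.8],
})

# Descriptive analytics: summarize what happened.
print("mean revenue:", sales["revenue"].mean())
print("avg month-over-month change:", sales["revenue"].diff().mean())

# Predictive analytics: fit a simple trend and forecast the next quarter.
model = LinearRegression().fit(sales[["month"]], sales["revenue"])
future = pd.DataFrame({"month": [13, 14, 15]})
print("forecast:", model.predict(future))
```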


👉 Predictive analytics in the context of Verisk Analytics

Verisk Analytics, Inc. is an American multinational data analytics and risk assessment firm based in Jersey City, New Jersey, with customers in insurance, natural resources, financial services, government, and risk management sectors. The company uses proprietary data sets and industry expertise to provide predictive analytics and decision support consultations in areas including fraud prevention, actuarial science, insurance coverage, fire protection, catastrophe and weather risk, and data management.

The company was privately held until an initial public offering on October 6, 2009, which raised $1.9 billion for several of the large insurance companies that were its primary shareholders, making it the largest IPO in the United States for the year. The firm did not raise any funds for itself in the IPO, which was designed to provide an opportunity for the firm's casualty and property insurer owners to sell some or all of their holdings and to provide a market price for those retaining their shares. The 2009 IPO was priced at $22 per share for 85.25 million shares owned by its shareholders, including American International Group, The Hartford and Travelers, making it the largest since the 2008 IPO for Visa Inc. In an action described by investment research company Morningstar as a "vote of confidence" in Verisk, Berkshire Hathaway was the only company among the firm's largest shareholders that did not sell any of its stock in the October 2009 IPO.


👉 Predictive analytics in the context of Machine learning

Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data, and thus perform tasks without explicit instructions. Advances in deep learning, a subdiscipline of machine learning, have allowed neural networks, a class of statistical algorithms, to surpass many previous machine learning approaches in performance.

ML finds application in many fields, including natural language processing, computer vision, speech recognition, email filtering, agriculture, and medicine. The application of ML to business problems is known as predictive analytics.
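
As an illustrative sketch of "learning from data and generalizing to unseen data", the Python snippet below holds out a test set the model never sees during training and reports accuracy on it. The synthetic dataset and the choice of a random-forest classifier are assumptions for demonstration, not a prescribed workflow.

```python
# A minimal sketch of the core ML loop: fit on training data,
# then measure generalization on held-out data the model never saw.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic classification data standing in for, e.g., business records.
X, y = make_classification(n_samples=2000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

clf = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Generalization: performance on unseen data, the quantity that matters
# when ML is applied to business problems as predictive analytics.
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```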


👉 Predictive analytics in the context of Data analysis

Data analysis is the process of inspecting, cleansing, transforming, and modeling data with the goal of discovering useful information, informing conclusions, and supporting decision-making. Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, and is used in different business, science, and social science domains. In today's business world, data analysis plays a role in making decisions more scientific and helping businesses operate more effectively.

Data mining is a particular data analysis technique that focuses on statistical modeling and knowledge discovery for predictive rather than purely descriptive purposes, while business intelligence covers data analysis that relies heavily on aggregation, focusing mainly on business information. In statistical applications, data analysis can be divided into descriptive statistics, exploratory data analysis (EDA), and confirmatory data analysis (CDA). EDA focuses on discovering new features in the data while CDA focuses on confirming or falsifying existing hypotheses. Predictive analytics focuses on the application of statistical models for predictive forecasting or classification, while text analytics applies statistical, linguistic, and structural techniques to extract and classify information from textual sources, a variety of unstructured data. All of the above are varieties of data analysis.
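
The short Python sketch below walks through those facets in miniature on fabricated records: inspecting and cleansing the data, an exploratory (EDA) summary by group, and a confirmatory (CDA) test of a pre-stated hypothesis. The column names, the injected missing value, and the hypothesis are all invented for illustration.

```python
# A miniature data-analysis pass over fabricated records:
# inspect -> cleanse -> explore (EDA) -> confirm (CDA).
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "group": ["A"] * 50 + ["B"] * 50,
    "value": np.concatenate([rng.normal(10, 2, 50), rng.normal(11, 2, 50)]),
})
df.loc[3, "value"] = np.nan  # simulate a dirty record

# Inspect and cleanse.
df.info()          # dtypes and missing-value counts
df = df.dropna()   # drop the incomplete record

# Exploratory: discover features of the data.
print(df.groupby("group")["value"].describe())

# Confirmatory: test the pre-stated hypothesis that B's mean exceeds A's.
t, p = stats.ttest_ind(df.loc[df["group"] == "B", "value"],
                       df.loc[df["group"] == "A", "value"],
                       alternative="greater")
print(f"t={t:.2f}, p={p:.4f}")
```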


👉 Predictive analytics in the context of Text analytics

Text mining, text data mining (TDM), or text analytics is the process of deriving high-quality information from text. It involves "the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources." Written resources may include websites, books, emails, reviews, and articles. High-quality information is typically obtained by identifying patterns and trends through means such as statistical pattern learning. According to Hotho et al. (2005), there are three perspectives on text mining: information extraction, data mining, and knowledge discovery in databases (KDD).

Text mining usually involves structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluating and interpreting the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interest. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling (i.e., learning relations between named entities).

Text analysis involves information retrieval, lexical analysis to study word frequency distributions, pattern recognition, tagging/annotation, information extraction, data mining techniques including link and association analysis, visualization, and predictive analytics. The overarching goal is, essentially, to turn text into data for analysis through the application of natural language processing (NLP) and various algorithms and analytical methods. An important phase of this process is the interpretation of the gathered information.
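
To ground a few of those steps, here is a hedged Python sketch on a handful of invented review snippets: it structures the raw text as a document-term matrix, inspects the word frequency distribution, and fits a tiny sentiment (text categorization) model. The corpus and labels are fabricated, and the bag-of-words plus naive Bayes approach is one simple choice among many.

```python
# A tiny text-mining pipeline on invented review snippets:
# structure the text, inspect word frequencies, fit a classifier.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = [
    "great product, works perfectly",
    "terrible quality, broke after a day",
    "excellent value and fast shipping",
    "awful experience, would not buy again",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative (fabricated)

# Structuring step: raw text -> document-term matrix.
vec = CountVectorizer()
X = vec.fit_transform(docs)

# Lexical analysis: word frequency distribution across the corpus.
counts = np.asarray(X.sum(axis=0)).ravel()
freqs = dict(zip(vec.get_feature_names_out(), counts))
print(sorted(freqs.items(), key=lambda kv: -kv[1])[:5])

# Pattern-deriving step: a simple text-categorization (sentiment) model.
clf = MultinomialNB().fit(X, labels)
print(clf.predict(vec.transform(["fast shipping, great value"])))
```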
