3D Imaging Market Is Expected to Generate Revenue of $55.77 Billion by 2027, Despite the COVID-19 Outbreak

The COVID-19 pandemic has had a negative impact on the global 3D imaging market. The market's resilience is mainly attributed to the advent of novel technological advancements in entertainment, healthcare, consumer electronics, and industrial automation. In addition, the emergence of 4D technology is projected to offer growth opportunities for the global 3D imaging market. Although industries across the majority of economies are completely shut down, several market players are pursuing effective strategies to curb the impact of COVID-19. For instance, in May 2020, Tesco introduced a 3D imaging system in Ireland to manage customer numbers and queuing. The technology was first deployed in Tesco’s 60 largest Superstore and Extra outlets to ensure an accurate, steady flow of customers throughout the day. These factors are projected to create lucrative opportunities even in the pandemic situation. In addition, owing to the coronavirus crisis, businesses are more concerned with customer optimism and loyalty. Therefore, the majority of enterprises may adopt 3D imaging to find ways to help clients through this severe situation, and this will significantly impact the global market after the pandemic. Our reports include the following:

  • Technological Impact
  • Social Impact
  • Investment Opportunity Analysis
  • Pre- & Post-COVID Market Scenario
  • Infrastructure Analysis
  • Supply Side & Demand Side Impact

According to the latest publication of Research Dive, the global 3D imaging market is set to register revenue of $55.77 billion by 2027.


The segmentation of the market has been done on the basis of product type, image sensor, application, end-use industry, and region. The report offers valuable information on drivers, restraints, vital segments, lucrative opportunities, and global leaders of the market.

Factors Affecting the Growth

As per our analysts' estimations, the versatility of 3D imaging across a broad range of industries such as entertainment, healthcare, and security & surveillance is fuelling the growth of the global 3D imaging market. However, low product penetration in low- and middle-income countries is projected to restrain the growth of the global 3D imaging industry during the forecast period.

Smartphones Segment Will Have the Fastest Growth During the Analysis Period

Based on product type, the global market for 3D imaging is categorized into 3D cameras, sonography, smartphones, and others. The smartphones segment is expected to rise at a noteworthy CAGR during the analysis period, owing to the rising popularity of capturing 3D objects and processing images via smartphones.

Complementary Metal-Oxide Semiconductors Will Be the Most Lucrative

Based on image sensor, the global market is segmented into charge-coupled devices (CCD) and complementary metal-oxide semiconductors (CMOS). The CMOS segment is expected to generate remarkable revenue by 2027.

Layout & Animation Will Have Rapid Market Growth During the Forecast Period

Based on application, the global 3D imaging market is broadly categorized into 3D modeling, 3D scanning, layout and animation, 3D rendering, and image reconstruction. The market size for layout and animation is expected to rise at a remarkable CAGR by 2027. Heavy investment in R&D in this segment is expected to boost the growth of the market over the forecast period.

Healthcare Sector Will Hold a Major Market Share in the Forecast Period

Based on end-use industry, the global market for 3D imaging is broadly categorized into entertainment, healthcare, architecture & engineering, industrial applications, security & surveillance, and others. The healthcare segment is expected to hold a significant market share and register remarkable revenue over the forecast period. The increase in the geriatric population is one of the major reasons for the rise in the adoption of 3D imaging during the forecast period.

Geographical Analysis and Major Market Players

Based on region, the 3D imaging market is segmented into North America, Europe, Asia-Pacific, and LAMEA. The Asia-Pacific 3D imaging market is expected to register significant revenue over the forecast timeframe. Rapidly rising government investments in 3D imaging solutions, along with a growing number of startups, mainly in China, India, and South Korea, are expected to bolster the growth of the Asia-Pacific 3D imaging market.

The leading players of the global 3D imaging market are:

  • General Electric Company
  • Autodesk Inc.
  • STMicroelectronics
  • Panasonic
  • Lockheed Martin
  • Koninklijke Philips N.V.
  • Trimble Inc.
  • FARO Technologies, Inc.

About Us:
Research Dive is a market research firm based in Pune, India. Maintaining the integrity and authenticity of its services, the firm provides services that are solely based on its exclusive data model, driven by a 360-degree research methodology, which guarantees comprehensive and accurate analysis. With unprecedented access to several paid data resources, a team of expert researchers, and a strict work ethic, the firm offers insights that are extremely precise and reliable. By scrutinizing relevant news releases, government publications, decades of trade data, and technical and white papers, Research Dive delivers the required services to its clients well within the required timeframe. Its expertise is focused on examining niche markets, targeting their major driving factors, and spotting threatening hindrances. Complementarily, it also has seamless collaborations with major industry aficionados that further give its research an edge. https://marketinsightinformation.blogspot.com/


Homework Assignment: Create a COVID19 At-Risk Score

Figure 1: The Art of Thinking Like a Data Scientist

I love teaching because the onus is on me to clearly and concisely communicate my concepts to my students.  As I told my students, if I am describing a concept and you don’t understand it, then that’s on me and not you.  But I do expect them to be like Tom Hanks in the movie “Big” and raise their hands and (politely) say “I don’t get it.”  If they don’t do that, then I’ll never learn how to improve my ability to convey important data, analytics, transformational and team empowerment concepts.

And that’s exactly what happened as I walked my Menlo College undergrad class through a “Thinking Like a Data Scientist” workshop.  One of the steps in the methodology calls out the importance of creating “analytic scores” that managers and front-line employees can use to make informed operational and policy decisions (see Figure 1).

While folks who have credit scores understand the basic concept of how scores are used to augment decision making, if you’re an undergrad student, credit scores may not be something that has popped up on your radar yet. So, in order to make the “Creating Analytic Scores” step of the methodology come to life, I decided that we would do a group exercise on creating a “COVID19 At-Risk of Death” score – a score that measures your relative likelihood of dying if you catch COVID19.

Analytic Scores are a dynamic rating or grade normalized to aid in performance tracking and decision-making.  Scores predict the likelihood of certain actions or outcomes and are typically normalized on a 0 to 100 scale (where 0 = bad outcome and 100 = good outcome).

Note:  Analytic Scores are NOT probabilities.  A score of 90 does not mean that there is a 90% probability of that outcome.  Analytic Scores measure the relative likelihood of an outcome versus a population. Consequently, a score of 90 simply means that one is highly likely to experience that outcome versus others in the population even if the probability of that outcome occurring is, for example, only 2%.
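
To make the “relative to a population” idea concrete, here is a minimal Python sketch (my own illustration, with made-up probabilities) that converts small absolute probabilities into a 0-to-100 percentile-style score:

import numpy as np

# Hypothetical predicted probabilities of the outcome for a small population
# (these numbers are made up for illustration; note they are all 2% or less).
predicted_prob = np.array([0.002, 0.005, 0.010, 0.015, 0.020, 0.001, 0.008])

# One simple way to express relative likelihood versus the population:
# rank each individual and rescale the ranks to a 0-100 score.
ranks = predicted_prob.argsort().argsort()          # 0 = lowest probability
scores = 100 * ranks / (len(predicted_prob) - 1)    # normalize ranks to 0-100

for p, s in zip(predicted_prob, scores):
    print(f"probability = {p:.3f}  ->  score = {s:.0f}")

Even though no individual’s probability exceeds 2%, the highest-risk individual still scores 100 relative to the rest of the population – exactly the distinction between a score and a probability described above.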

Sample scores can be seen in Figure 2.

Figure 2: Sample Scores by Industry

The true beauty of an Analytic Score is its ability to convert a wide range of variables and metrics, all weighted, valued and correlated differently depending upon what’s being predicted, into a single number that can be used to guide or augment decision-making. FICO (developed by the Fair, Isaac and Company) may be the best example of a score that is used to predict certain behaviors, in this case, the likelihood of an individual borrower to repay a loan or another form of credit. The FICO model ingests a wide range of consumer transactional data to create and update these individualized scores.  Yes, each FICO score is a unique measurement of your predicted ability to repay a loan as compared to the general population (see Figure 3).

Figure 3: Source: “What Is a Credit Score, and What Are the Credit Score Ranges?”

Everyone has an individual credit score. The score isn’t just based upon generalized categories like age, income level, job history, education level or some other descriptive category of data.  Instead, the FICO credit score analyzes a wide variety of transactional data such as payment history, credit utilization, length of credit history, new credit applications, and credit mix across a wide variety of credit vehicles.  From this transactional data, FICO is able to uncover and codify individual behavioral characteristics that are indicative of a person’s ability to repay a loan.

But what makes Analytic Scores particularly powerful is when you integrate the individualized scores into an Analytic Profile.  Analytic Profiles capture and codify the propensities, patterns, trends and relationships (codified by Analytic Scores in many cases) for the organization’s key human and device assets at the individual human or device level (see Figure 4).

Figure 4 Analytic Profiles: The Key to Data Monetization

Analytic Scores and Analytic Profiles power Nanoeconomics, which is the economic theory of identifying, codifying, and attaining new sources of customer, product, and operational value based upon individual human and device insights (or predictive propensities).  It is Nanoeconomics which enables organizations to transform their economic value curve so that they can “do more with less”, exactly the situation in which the world finds itself in the mass effort to vaccinate people and transform the healthcare industry (see Figure 5).

Figure 5: Transforming the Healthcare Economic Value Curve

My students and I decided to embark on a class exercise to see if we could create a simple score that measured the relative risk of any individual dying from catching COVID19.  We then discussed how we could use this score to prioritize who got COVID19 shots versus the overly-generalized method of prioritization that is being used today. Here is the process that we used in the class.

Step 1: Identify Variables that Might Predict COVID19 Death.  I asked the class to research three variables that might be indicative or predictive of death if one caught COVID19.  We settled on the following three variables:  Obesity, Age and Gender.  There were others from which to select (e.g., pre-existing conditions, vitamin D exposure, exercise, diet), but I settled on these three because they required different data transformations to be useful.

Step 2:  Find Data Sources and Determine Variable Values.  Our research uncovered data sources that we could use for each of the three variables in Figure 6. 

Figure 6: Step 2: Find Data Sources and Transform into Usable Metrics

To measure Obesity, we used the Body Mass Index (BMI): we input the individual’s height and weight and then calculate that individual’s BMI and BMI classification (underweight, normal, overweight, and obese).  To measure Age, we used the percentage of deaths by age bracket. And for Gender, we used Male = yes or no.
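
The BMI piece of this transformation is easy to reproduce. Below is a small Python sketch using the standard BMI cut-offs; the class spreadsheet may differ in its details:

def bmi_and_class(height_m, weight_kg):
    """Compute Body Mass Index and the standard BMI classification."""
    bmi = weight_kg / (height_m ** 2)
    if bmi < 18.5:
        category = "underweight"
    elif bmi < 25:
        category = "normal"
    elif bmi < 30:
        category = "overweight"
    else:
        category = "obese"
    return bmi, category

print(bmi_and_class(1.75, 95))   # roughly (31.0, 'obese')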

Step 3:  Calculate “Percentage Increased Risk Adjustment” per Variable.  Next, we adjusted the variables for increased COVID19 risk.  For example, we applied risk adjustment coefficients to the BMI categories of overweight (BMI * 2) and obese (BMI * 6) to reflect that the increase in risk is not linear as one becomes more overweight.  We did the same thing with age, increasing the risk bias in the older brackets.  Note:  this is an area where we could have used machine learning algorithms (such as clustering, linear regression, k-nearest neighbors, and random forests) to create more precise increased-risk adjustments.

Step 4:  Input Relative Variable Importance.  Since not all variables carry equal weight in determining COVID19 at-risk of death, we allowed the students to assign relative weights to the different variables.  This provided an excellent opportunity to explain how a neural network could be used to optimize the variable weights, and to discuss neural network “learning and adjusting” concepts like stochastic gradient descent and backpropagation.

Step 5:  Normalize Input Variables.  I could have simplified the spreadsheet if I had made the students enter percentages that totaled 100%, but it was just easier to let them enter relative weights (between 1 and 100) and then have the spreadsheet normalize the input variables to 100%.  Yea, I’m so kind-hearted.

Step 6:  Calculate Population At-risk Scores per Variable.  Next, we calculated a population score (using maximum values) for each of the three variables in order to provide a baseline against which to judge the level of the individual’s at-risk variables.

Step 7:  Calculate Individual At-risk Scores per Variable.  We then calculated the score for each of the three variables for the individual based upon their inputted data: height, weight, age, and gender.

Step 8:  Normalize Individual At-Risk Scores against Population At-Risk Scores.  Finally, we normalized the individual’s score against the population score to create a single number between 0 and 100 (where a score of 100 is highly at-risk and a score near 0 is very low-risk).
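
Putting steps 3 through 8 together, here is a compact Python sketch of how such a score can be assembled. The risk multipliers, weights, and population maxima below are illustrative assumptions, not the values the class actually used:

# A simplified sketch of steps 3-8, not the class spreadsheet itself.

# Step 3: risk-adjusted value per variable for one individual
# (e.g., BMI multiplied by a coefficient that grows with the BMI category).
individual = {"bmi": 31.0 * 6,   # obese -> BMI * 6
              "age": 70,         # simplified: the class used death rates by age bracket
              "male": 1}         # 1 = male, 0 = female

# Step 6: maximum plausible risk-adjusted values used as the population baseline.
population_max = {"bmi": 45.0 * 6, "age": 100, "male": 1}

# Steps 4-5: relative importance weights entered by the students, normalized to sum to 1.
raw_weights = {"bmi": 40, "age": 50, "male": 10}
total = sum(raw_weights.values())
weights = {k: v / total for k, v in raw_weights.items()}

# Steps 7-8: per-variable individual scores relative to the population baseline,
# combined with the weights into a single 0-100 score.
score = 100 * sum(weights[k] * individual[k] / population_max[k]
                  for k in individual)
print(round(score, 1))   # higher = more at risk

Changing the raw weights or the risk multipliers and re-running the calculation mirrors the “what-if” experimentation the students did in the spreadsheet.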

The process and final spreadsheet are captured in Figure 7.

Figure 7: COVID19 At-Risk of Death Score Process and Spreadsheet

Now that the data and transformations are in the spreadsheet, the students could play with the different transformational variables, like increasing the risk factor for obesity, the relative weights of the different variables, and even the input variables. And while most of the input variables are not adjustable (your height, age, and gender are hard to change…but I guess it is possible…), weight was certainly a variable that we could adjust, and we used this as an opportunity to see its impact on an individual’s at-risk score (yea I know, I need to lose weight…).

Analytic Scores are a powerful yet simple concept for how data science can integrate a wide variety of metrics, transform those metrics into more insightful and predictive metrics, create a weighting method that gives the most relevant weights to the most relevant metrics, and then munge all of those weighted and transformed metrics into a single number that can be used to guide and augment decision making.  Plus, one can iteratively build out the Analytic Score by starting small with some simple metrics and analytics, and then continuously learn and fine-tune the Analytic Score with new metrics, variables, and analytics that might yield better predictors of performance. Very powerful stuff!

One final point about Analytic Scores: one cannot make critical policy and operational decisions based upon the readings of a single score.  To really leverage Analytic Profiles and Analytic Scores to make more informed, granular policy and operational decisions (and activate the power of Nanoeconomics to do more with less), we would want to couple the “COVID19 At-Risk of Death Score” with a “COVID19 At-Risk of Contracting Score” that measures the likelihood of someone catching COVID19. Why prioritize someone highly based upon the “Death” score if their likelihood of catching COVID19 is low (i.e., they live in a remote, sparsely populated location, work from home, and are adamant about wearing a high-quality N95 mask and practicing social distancing when in public)?  Heck, one might even want to create a “COVID19 At-Risk of Transmission” score to measure someone’s likelihood of transmitting the virus.

If you are interested in the resulting spreadsheet, please contact me via LinkedIn and I will send the spreadsheet to you. You brainiacs out there will likely uncover better data sources and better variable transformations that could improve the accuracy of the COVID19 At-Risk spreadsheet.  And if you create a better formula than the one that we created (which won’t be hard), please share the spreadsheet with me so that I can incorporate it into my next class.  Hey, we are learning together!


Causal AI dictum: A dataset is model-free


Time and time again, Judea Pearl makes the point on Twitter to neural net advocates that they are trying to do a provably impossible task, to derive a model from data. I could be wrong, but this is what I think he means.

When Pearl says “data”, he is referring to what is commonly called a dataset. A dataset is a table of data where all the entries of each column have the same units and measure a single feature, and each row refers to one particular sample or individual. Datasets are particularly useful for estimating probability distributions and for training neural nets. When Pearl says a “model“, he is referring to a DAG (directed acyclic graph) or a bnet (Bayesian network = a DAG plus a probability table for each node of the DAG).

Sure, you can try to derive a model from a dataset, but you’ll soon find out that you can only go so far.

The process of finding a partial model from a dataset is called structure learning (SL).  SL can be done quite nicely with Marco Scutari’s open source program bnlearn. There are 2 main types of SL algorithms: score-based and constraint-based. The first and still very competitive constraint-based SL algorithm was the Inductive Causation (IC) algorithm proposed by Pearl and Verma in 1991. So Pearl is quite aware of SL. The problem is that SL often cannot narrow down the model to a single one. It finds an undirected graph (UG), and it can determine the direction of some of the arrows in the UG, but it is often incapable, for well understood fundamental – not just technical – reasons, of finding the direction of ALL the arrows of the UG. So it often fails to fully specify a DAG model.
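
A two-variable Gaussian example (my own illustration, not Pearl’s) makes the point concrete: the DAGs X → Y and Y → X can generate exactly the same joint distribution, so nothing that looks only at the dataset can orient the arrow.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Model A: X causes Y
x_a = rng.normal(size=n)
y_a = 0.8 * x_a + 0.6 * rng.normal(size=n)

# Model B: Y causes X
y_b = rng.normal(size=n)
x_b = 0.8 * y_b + 0.6 * rng.normal(size=n)

# Both models produce the same bivariate Gaussian (unit variances, correlation 0.8),
# so no amount of observational data alone can tell the two DAGs apart.
print(np.corrcoef(x_a, y_a)[0, 1])   # ~0.8
print(np.corrcoef(x_b, y_b)[0, 1])   # ~0.8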

Let’s call the ordered pair (dataset, model) a data SeMo. Then what I believe Pearl is saying is that a dataset is model-free or model-less (although sometimes one can find a partial model hidden in there). A dataset is not a data SeMo.

Sample usage of term data SeMo: The vast library of data SeMo’s in our heads allows us to solve CAPTCHAs quickly and effortlessly. What fun!


The Role Of IoT and Big Data In Payroll Process For Businesses


“Hey, Siri! How many steps did I walk today?” “Hello, your average step count for the day is 20,000.” The Internet of Things, popularly termed IoT, has become an integral part of our lives. People and machines these days are quietly linked to one another. We have gotten used to counting on it so much that we never fail to use it at least once every day. Statistics state that 127 devices get connected to the internet every second. Let’s take smartwatch bands, for instance. While some wear them only to look cool, these watches are specially curated to meet health-conscious people’s needs. They allow us to track our blood pressure, heart rate, daily step count, and calorie count. No matter how much people curse the internet, the idea of living without it has become nearly impossible in this tech-savvy world.

Businesses have realized the importance of big data and how it can help them nurture their business activities. Organizations use real-time data to predict profits, analyze their commodities and the necessary changes to be made, and improve decision-making. Research confirms that nearly 90% of all stored data was generated in the last two years. What was once just a field of IT is now an essential part of every business enterprise.

Let us now understand the meaning of big data and IoT.

What Is Big Data

Big Data refers to the growing volume of data, or information, being stored on the web. These collections of data are so huge that it is impossible for manual data processors, i.e., humans, to process them. Companies make use of this capability to store their employee information and other data related to their finances. Organizations use it as a part of their cloud-based payroll solutions to make money-related decisions. An example of big data is social networking sites collecting information from comments, pictures, and videos.

What Is IoT

IoT can be defined as all the gadgets around us that are linked to the internet. It is responsible for gathering data and making it available. It works through sensors that can be added to tools like smartwatches, which provide fresh data without the help of physical labor. IoT is making us digital and smart by blending technology and humans. Examples include refrigerators or air-conditioners connected to our mobile devices through applications.

Benefits Of Using Big Data In Businesses

Big data lends a hand to businesses in several ways. Some of them are-

Helps Retain Talent

Remember how, in the old days, manual HR used to struggle to find candidates with the right talent and retain them? Well, now that big data has come to their rescue, it has become a no-brainer. Not only does it help find, filter, and interview candidates based on the organization’s requirements, it also helps hold on to worthy employees. Organizations can collect data from employees regarding their needs and work towards meeting them in order to retain those employees.

Big data also helps companies find new talent. Its robust features help organizations become stronger by collecting honest employee reviews, past revenue histories, sales, and profits, and sharing this information with new hires. This strategy attracts new candidates to the organization and guides managers in finding them.

Ensures Proper Data Management

Active business organizations generate bytes upon bytes of information every day. Therefore, it is essential that the sources, i.e., the automated devices they rely on, are safe and ensure regular data storage and management. One of the things big data tools provide is a secure space for companies to save and access stored information. This includes confidential data like payroll information and employee data such as attendance regularization, leave and overtime, taxes, and other related items.

This is a blessing to the management and the employees since it ensures smooth data flow. Management can make quick, fact-based decisions through real-time data, while the workforce can view their salary payslips, thereby increasing their work efficiency.

Finds And Corrects Errors

The days when organizations had to rely on manual processes for handling their monetary matters are long gone. Big data automation provides companies with mistake-free financial statistics. Businesses that rely on manual HR struggle here, since manual processing doesn’t ensure error-free results. An automated system, on the other hand, makes the department’s work easy and helps companies trust the information.

It will highlight where errors are happening, guide employees in correcting them, and ensure such mistakes don’t happen again. This strengthens the roots of the organization’s financial management. It will sharpen the staff’s payroll skills and increase productivity among employees, because ensuring zero errors means they get paid fairly.

Aids In Decision-making

The role big data plays in organizational decision-making is utterly crucial. It has the capability to dig deeper than any manual effort and process volumes of data that humans cannot. It provides organizations with necessary information at great speed, which is what helps companies make quick, real-time decisions.

Businesses can also make proper use of this feature to prepare for any issues they might face, since the data will show them where they are headed and prepare them for the same. It will alert organizations to any potholes along the way and provide them with alternative solutions.

Enhancing Employees’ Development

It is essential that organizations help their employees enhance their experience within the company. In order to do that, they must develop employees’ roles by propelling them into training and development. This is vital for both the enterprise and the workforce, since following the same schedule will lead to employees losing interest in what they do, and hence to lower productivity. Therefore, to avoid stagnation, organizations must take this step.

Big data uses employee information to provide companies with development recommendations, from which they can plan out training procedures for every individual employee based on how they want to evolve.

Benefits Of Using IoT In Businesses

Following is how beneficial IoT is to business organizations-

Collects Real-time Data

Continuously collecting data is essential for management since it drives the development of the organization. Managers can get hold of the necessary information with the help of collaborative tools like Hangouts, email, etc. Doing this helps top management make crucial decisions and leads to the business’ development.

IoT gadgets like CCTV cameras, PCs linked to the organization’s server host, etc., can aid the HR department with the same. These tracking tools detect the required data and send it to the managers who need it.

Tracks Employees’ Work

With the latest technology tools, tracking and managing employees’ work has become a cakewalk for organizations. They can track employees and identify the factors that might distract them from working. Since all the work employees do has now shifted virtually and businesses have started going green considering the current environmental situation, work tracking has become much more manageable. Organizations can know how much work each employee does every day with such tools’ help.

For example, by adding a sensory device on a content writer’s keyboard, the company can know exactly how many words he writes every day. This reduces the chances of them stealing from the company. This helps the managers place the right employees in the position(s) that fits them best.

Automates Monetary Process

IoT helps a lot with processing the organization’s finances. Tracking tools like GPS can help managers get information about employees’ absences and leaves. The location sensor provides precise data and tracks the number of hours an employee has worked, which helps managers pay workers accordingly. With tools like a robust calculation engine that ensures zero errors, IoT lends great help to organizations in processing employees’ salaries.

Looks Out For The Workforce

It is obvious that a fit and healthy worker will contribute more to the organization than an ill one. Organizations need to ensure that their employees stay healthy and don’t face health issues or stress at work. They can monitor this by giving them smartwatches and tracking their overall health. These wearables can track their efficiency and help businesses point out the issues faced and provide solutions to fight them.

Final Note

Modern tools like big data and IoT have entirely changed the way businesses function these days. Organizations can make use of these instruments to develop the business and help their employees evolve with it.


DSC Weekly Digest 22 March 2021

One of the first jobs that I had after college (in the midst of a recession) was working as a typesetter for a small company in Florida in the latter 1980s. Having spent almost every waking hour on computers during school, this was hardly the kind of job where you’d expect there to be an issue with job security. Working on the then-current Linotronic hardware, my job was to mark up text to be formatted on film using computer codes, which gave me an early insight into work I’d be doing a decade later with XML.

Things went swimmingly for the first six months or so, until a small company called Aldus, out of Seattle, Washington, released a program called PageMaker for the new Macintosh computer. For many companies, PageMaker was a game-changer, making it possible to create professional-quality content visually in real-time. For our small typesetting firm, it was the End Times. The company went from having a revenue of $10 million when I started to less than $150,000 when I was finally laid off with the rest of the staff a year later. For me, it provided a window into understanding how quickly technology can completely rewrite the landscape.

The field of data science has changed dramatically since DSC first began in 2012. The niches that opened up after those first few years have largely been filled, and competition for baseline data science jobs has increased even as salaries have dropped. Knowing what a Bayesian is or how to correct for skewed distributions is no longer enough, and in many cases, work that used to require R or Python working in an IDE has now become integrated into mainstream BI tools, available at the click of a button.

That does not mean that there are no data science positions out there, only that many of them are increasingly specialized. In essence, the future of the data scientist is where it’s always been, as a subject matter expert who has the knowledge and experience to use the tools of data science, machine learning, and AI in order to better understand and interpret their own domain. This shouldn’t be surprising, but for all too many people who want to be data scientists first, I’d make the case that they should look upon data science as a toolset, a set of skills that all researchers and analysts should have.

This is why we run Data Science Central, and why we are expanding its focus to consider the width and breadth of digital transformation in our society. Data Science Central is your community. It is a chance to learn from other practitioners, and a chance to communicate what you know to the data science community overall. I encourage you to submit original articles and to make your name known to the people that are going to be hiring in the coming year. As always let us know what you think.

In media res,
Kurt Cagle
Community Editor,
Data Science Central


R vs. Python vs. Julia: How easy is it to write efficient code?

In my last post, I compared R to Julia, showing how Julia brings a refreshing programming mindset to the Data Science community. The main takeaway is that with Julia, you no longer need to vectorize to improve performance. In fact, good use of loops might deliver the best performance.

In this post, I am adding Python to the mix. The language of choice of Data Scientists has a word to say. We will solve a very simple problem where built-in implementations are available and where programming the algorithm from scratch is straightforward. The goal is to understand our options when we need to write efficient code.

Experiments

Let us consider the problem of membership testing on an unsorted vector of integers:

julia> 10 ∈ [71,38,10,65,38]
true
julia> 20 ∈ [71,38,10,65,38]
false

I implemented the linear search algorithm in R, Python, and Julia, and compared CPU times against a C implementation (1,000 searches over an array of 1,000,000 unique integers). Several flavours of implementation were tested (a rough Python sketch of some of them follows the list below):

  • Built-in functions/operators (in, findfirst);
  • Vectorized (vec);
  • Map-reduce (mapr);
  • Loops (for, foreach).
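
The benchmark code itself is in the linked article; the following rough Python sketch just illustrates three of these flavours (built-in operator, vectorized NumPy test, and an element-based loop) on a comparable input size:

import numpy as np

rng = np.random.default_rng(1)
values = rng.permutation(1_000_000)      # 1,000,000 unique integers
values_list = values.tolist()
target = 42

# Built-in membership operator on a native list
found_builtin = target in values_list

# Vectorized test on a NumPy array
found_vec = bool((values == target).any())

# Explicit loop; iterating over elements is faster in Python than indexing
def linear_search(xs, t):
    for x in xs:
        if x == t:
            return True
    return False

found_loop = linear_search(values_list, target)
print(found_builtin, found_vec, found_loop)   # all True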

Results

Looking to results side by side for this simple problem, we observe that:

  • Julia’s performance is close to C almost independently on the implementation;
  • The exception in Julia is when writing R-like vectorized code, with performance degrading about 3x;
  • When adding JIT compilation (Numba) to Python, loop-based implementations got close to Julia’s performance; still Numba imposes constraints on your Python code, making this option a compromise;
  • In Python, pick well between native lists and NumPy arrays and when to use Numba: for the less experienced it is not obvious which is the best data structure (performance-wise), and there is no clear winner (especially if you include the use case of adding elements dynamically, not covered here);
  • R is not the fastest, but you get consistent behavior compared to Python: the slowest implementation in R is ~24x slower than the fastest, while in Python it is ~343x (in Julia, ~3x);
  • Native R always performed better than native Python;
  • Whenever you cannot avoid looping in Python or R, element-based looping is more efficient than index-based looping.

A comprehensive version of this article was originally published here (open access). 


Formulating your problem as a reinforcement learning problem

This blog is the first part of a three-blog series, which covers the basics of reinforcement learning (RL) and how we can formulate a given problem as a reinforcement learning problem.

The blog is based on my teaching and insights from our book at the University of Oxford. I also wish to thank my co-authors Phil Osborne and Dr Matt Taylor for their feedback on my work.

In this blog, we introduce Reinforcement learning and the idea of an autonomous agent.

In the next blog, we will discuss the RL problem in the context of other similar techniques – specifically multi-armed bandits and contextual bandits.

Finally, we will look at various applications of RL in the context of an autonomous agent

Thus, in these three blogs, we consider RL not as an algorithm in itself but rather as a mechanism to create autonomous agents (and their applications).

This series will help you understand the core concepts of reinforcement learning and encourage you to frame and define your own problem as an RL problem.

What is Reinforcement Learning? It is a field of Artificial Intelligence in which the machine learns in an environment set up through trial and error. Here the machine is referred to as an agent that performs certain actions, and for each valuable action a reward is given. A reinforcement learning algorithm’s focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).

Understanding with an example . . .

Let’s go with the most common yet easy example to understand the basic concept of reinforcement learning. Think of a new dog and the ways in which you train it. Here, the dog is the agent and its surroundings become the environment. Now, when you throw a frisbee away, you expect your dog to run after it and bring it back to you. Here, throwing the frisbee sets the state, and whether or not the dog runs after the frisbee is its action. If the dog chooses to run after the frisbee (an action) and bring it back, you reward it with a cookie/biscuit to indicate a positive response. If not, some punishment can be given to indicate a negative response. That’s exactly what happens in reinforcement learning.

This interactive method of learning stands on four pillars, also called “The Elements of Reinforcement Learning” –

  • Policy – A policy can be described as the way the agent behaves at a given instance. In more generic language, it is the strategy used by the agent towards its end goal.
  • Reward – In RL, training the agent is more like luring it with a bait of reward points. For every right decision an agent makes, it is rewarded with positive points, whereas for every wrong decision, a punishment or negative points are given.
  • Value – The value function works on the probability of achieving the maximum reward. It determines whether or not the current action in a given state will yield, or help yield, the best reward.
  • Model (optional) – RL can be either model-free or model-based. Model-based reinforcement learning connects the environment with some prior knowledge, i.e., the agent plans its policy using an integrated functional model of the environment.

Formulating an RL problem . . .

Reinforcement learning is a general interacting, learning, predicting, and decision-making paradigm. It can be applied wherever the problem can be treated as a sequential decision-making problem, for which we first formulate the problem by defining the environment, the agent, states, actions, and rewards.

A summary of the steps involved in formulating an RL problem to modelling the problem and finally deploying the system is discussed below –

  • Define the RL problem – Define environment, agent, states, actions, and rewards.
  • Collect data – Prepare data from interactions with the environment and/or a model/simulator.
  • Feature engineering – This is often a manual task that draws on domain knowledge.
  • Choose modelling method – Decide the best representation and model/algorithm. It can be online/offline, on-/off-policy, model-free/model-based, etc.
  • Backtrack and refine – Iterate and refine the previous steps based on experiments.
  • Deploy and Monitor – Monitor the deployed system 

RL framework – Markov Decision Processes (MDPs)

Generally, typical reinforcement learning problems are formalized as Markov Decision Processes, which act as a framework for modelling a decision-making situation. They follow the Markov property, i.e., any future state depends only on the current state and is independent of past states, hence the name Markov decision process. Mathematically, an MDP consists of the following elements –

  • Actions ‘A’,
  • States ‘S’,
  • Reward function ‘R’,
  • Value ‘V’
  • Policy ‘π’

where the end goal is to estimate the value of a state, V(s), or the value of state-action pairs, Q(s,a), while the agent continuously interacts with the environment.
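
The blog stops at the formulation, but a tiny tabular Q-learning loop shows how an agent can learn Q(s,a) purely from interaction. The corridor environment and hyper-parameters below are my own toy illustration, not an example from the book:

import random

# A toy 3-state corridor MDP: states 0, 1, 2; actions 0 = left, 1 = right;
# reward 1 for reaching the right-most state, which ends the episode.
N_STATES = 3
ACTIONS = [0, 1]
alpha, gamma, epsilon = 0.1, 0.9, 0.2          # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]      # Q(s, a) table, initialised to 0

def step(state, action):
    """Environment dynamics: move left/right; the episode ends at the right-most state."""
    nxt = min(N_STATES - 1, state + 1) if action == 1 else max(0, state - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done

for _ in range(2000):                          # repeated agent-environment interaction
    state, done = 0, False
    while not done:
        # epsilon-greedy policy: explore sometimes, otherwise exploit current Q
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge Q(s,a) toward reward + gamma * max_a' Q(s',a')
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

print(Q)   # "go right" (action 1) should end up with the higher value in states 0 and 1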

In the next blog, we will discuss the RL problem in the context of other similar techniques – specifically multi-armed bandits and contextual bandits. This will expand on the problem of using RL to create autonomous agents. In the final part, we will talk about real-world reinforcement learning applications and how one can apply them in multiple sectors.

About Me (Kajal Singh)

Kajal Singh is a Data Scientist and a Tutor on the Artificial Intelligence – Cloud and Edge Implementations course at the University of Oxford. She is also the co-author of the book “Applications of Reinforcement Learning to Real-World Data: An educational introduction to the fundamentals of Reinforcement Learning with practical examples on real data (2021)”.



Data Agility and ‘Popularity’ vs. Data Quality in Self-Serve BI and Analytics

One of the most valuable aspects of self-serve business intelligence is the opportunity it provides for data and analytical sharing among business users within the organization. When business users adopt true self-serve BI tools like Plug n’ Play Predictive Analysis, Smart Data Visualization, and Self-Serve Data Preparation, they can apply the domain knowledge and skill they have developed in their role to create reports, analyze data and make recommendations and decisions with confidence.

It is not uncommon for data shared or created by a particular business user to become popular among other business users because of a particular analytical approach, the clarity of the data and conclusions presented, or other unique aspects of the user’s approach to business intelligence and reporting. In fact, in some organizations, a business user can get a reputation for being ‘popular’ or dependable, and her or his business intelligence analysis and reports might be actively sought to shape opinion and make decisions. That’s right: today there is a social networking aspect even in Business Intelligence. Think of it as Social Business Intelligence or Collaborative Business Intelligence. It is a concept we can certainly understand, given the modern propensity for socializing: people want to share, discuss, and rate information, and they want to understand the context, views, and opinions of their peers and teammates.

By allowing your team members to easily gather, analyze and present data using sophisticated tools and algorithms (without the assistance of a programmer, data scientist or analyst), you can encourage and adopt a data sharing environment that will help everyone do a better job and empower them with tools they need to make the right decisions.

When considering the advantages of data popularity and sharing, one must also consider that not all popular data will be high-quality data (and vice versa). So, there is definitely a need to provide both approaches in data analysis. Create a balance between data quality and data popularity to provide your organization and business users with the best of both worlds.

You may also wish to improve the context and understanding of data among business users by leveraging the IT curation approach to data and ‘watermarking’ (labeling/tagging) selected data to indicate that this data has been certified and is dependable. Business users can then achieve a better understanding of the credibility and integrity of the integrated data they view and analyze in the business intelligence dashboard and reports.

As the organization builds a portfolio of reports and shared data it can better assess the types of data, formats, analysis and reports that are popular among its users and will provide more value to the team and the enterprise.

Encourage your team members to share their views and ratings with self-serve data preparation and BI tools, and create an environment that will support power business users. While self-serve data prep may not always produce 100% quality data, it can provide valuable insight and food for thought that may prompt further exploration and analysis by an analyst or a full-blown Extract, Transform and Load (ETL) or Data Warehouse (DWH) inquiry and report.

There are many times when the data extracted and analyzed through self-serve data preparation is all you will need; times when the organization, user, or team needs solid information without a guarantee of 100% accuracy. In these times, the agility of self-serve data prep provides real value to the business because it allows your team to move forward, ask questions, make decisions, share information, and remain competitive without waiting for valuable skilled resources to get around to creating a report or performing a unique inquiry or search for data.

If you build a team of power business users and transform your business user organization into Citizen Data Scientists, your ‘social network’ of data sharing and rating will evolve and provide a real benefit to the organization. Those ‘popular’, creative business users will emerge, and other users will benefit from their unique approach to data analysis and gain additional insight. This collaborative environment turns dry data analysis and tedious reporting into a dynamic tool that can be used to find the real ‘nuggets’ of information that will change your business.

When you need 100% accuracy – by all means seek out your IT staff, your data scientists and your analysts and leverage the skilled resources to get the crucial data you need. For much of your organization, your data analysis needs and your important tasks, the data and analysis gleaned from a self-serve data preparation and business intelligence solution will serve you very well, and your business users will become more valuable, knowledgeable assets to your organization.

By balancing agility and data ‘popularity’ and democratization with high quality, skilled data analysis, you can better leverage all of your resources and create an impressive, world-class business ‘social network’ to conquer the market and improve your business. To achieve a balance between data quality and data popularity, your organization may wish to create a unique index within the business intelligence analytics portal, to illustrate and balance data popularity and data quality, and thereby expand user understanding and improve and optimize analytics at every level within the enterprise.


Unlocking e-commerce growth for CPG with data and analytics

CPG opportunities in the new normal
The COVID-19 pandemic has compelled businesses to shift to virtual marketplaces, and CPG has been no different. Consumers increasingly prefer online portals to brick-and-mortar stores, and the time is ripe for CPG companies to extend their digital reach. Some reports suggest that nearly 60% of consumers feared getting infected by visiting a physical store. This led to more than 50% ordering products online that they would otherwise normally purchase directly from stores. As per the latest reports, the average spend per grocery order shot up to an all-time high of US$95 per order in August 2020, with monthly repeat-purchase intent peaking at 75%, signifying that online shopping is set to become one of the reigning CPG trends going forward.

Prior to the coronavirus pandemic, e-commerce accounted for approximately 4% of all grocery sales, a tiny portion of the overall volume. But during the pandemic, the share of grocery spending going online increased to as high as 20%, according to Sigmoid analysis. The figure is expected to settle at about 10-12% by 2022. A boost in digital sales of essential goods and personal care products, which were purchased more frequently online during the pandemic, has driven CPG spend growth. Consequently, digital ad spending in the US consumer packaged goods (CPG) industry will increase 5.2% to $19.40 billion in 2020. With marketers relying on data to guide their digital advertising spend, ML-driven multi-touch attribution provides them with customer journey insights to optimize campaigns.

Boosting CPG data insights with e-commerce analytics
With the surge in online grocery shopping, copious amounts of user data are being generated, presenting online CPG businesses with unique opportunities. The utilization of e-commerce analytics will yield significant benefits and could be a game-changer for CPG companies in a highly competitive market. In fact, more than half (52%) of the CPG respondents in a recent survey reported having the resources to react quicker and analyze faster. Another 7% of the respondents predicted their analytics spending would reach 25% of their total IT expenses by 2023. Organizations are clearly inclined to bolster their data analytics initiatives. However, they also need to plan and execute their data strategy for CPG carefully.

CPG companies must invest more in analytics to align their strategies and business models with evolving consumer trends and requirements. The first step toward unearthing actionable data insights is to outline the types of data to be considered. Usually, there is no single set of data that is used across all business types. Data requirements vary with the specific requirements of the industry, the market, or even the individual business entity. However, datasets can be broadly categorized into product-based data and consumer behavior data. Product-based data includes tracking and logging product-specific trends and statistics. Some product-specific datasets are:

  1. Individual product sales trends
  2. Sales analysis of products within a category
  3. Distribution
  4. Price analytics

Customer behavior data points, on the other hand, include tracking and logging the purchase behavior, preferences, and trends of online shoppers. Customer-specific datasets are:

  1. Frequency of making purchases
  2. Cart abandonment to transaction completion analysis
  3. Brand/ Store loyalty
  4. Consumer demographics

Once the required data has been made available, the next step is to glean insights from it. Specific analyses need to be done keeping the end goal in mind. The data obtained can be utilized in various ways, such as:

Personalized marketing: This involves understanding consumer behavior to determine preferences and generate recommendations. Learn how personalized recommendations driven by advanced analytics improved customer experience and product sales for a popular cosmetic brand. As a future-forward example, a leading online retailer has patented a new feature that enables smart speakers to detect when a user is under the weather and generate recommendations accordingly, including specific dietary choices from their pantry.

Order fulfilment: The surge in online CPG retail is redefining the traditional order fulfilment process. With online retail, CPG players are now able to cater to a wider demographic as well as a larger geographic footprint, while short-term trends such as bulk-buying behavior are also compelling them to mold their business approach. In this new business paradigm, they need to build capabilities to capture data from omni-channel sources and create data lakes to ingest and analyze data from disparate sources.

Product launches: Today, CPG companies mostly depend upon retailers for consumer data generated from POS transactions and sales performance figures. In the new normal, the proliferation of online retail will generate significantly larger and substantially more diverse data streams, which will provide CPG companies with new opportunities to leverage user data. This will help them redefine personalized recommendations with newer perspectives and offerings.

Category-specific decision-making: CPG analytics output can objectively highlight the strengths, weaknesses, inefficiencies, and opportunities within a particular product category, giving granular visibility into each product type. Businesses that have successfully adopted data-analytics-enabled decision-making have seen up to a 22% increase in demand for specific products.

What CPG firms require to build e-commerce strategy:
Data culture and automation: A culture of data as an asset, predictive analytics, and AI fully embedded in day-to-day operations and embraced by company leadership to swiftly address shifts in e-commerce demand, supply chain, and consumer preferences. Reduce manual labor by automating processes across functionalities such as demand forecasting.

Digital infrastructure: Connected data platforms, IT, and infrastructure that enable full visibility of the customer’s path to purchase and e-commerce dashboards that provide real-time insights into changes in demand. Prioritize customer-centricity across critical touchpoints to improve conversion rates and drive revenue growth.

Partnerships and ecosystem: Forge strategic alliances to establish ecosystems that differentiate customer services. CPGs partnering with 3PLs and digital natives is a vital element in the exploration of new revenue streams and operating models. Acquire or partner with digital specialists to contain costs by expediting and optimizing processes.

The need to bolster data engineering capabilities
In the current scheme of things, it is a business imperative for CPG players to leverage data analytics to enable quicker yet informed decision-making and achieve consistent business gains. But how do CPG organizations gain the most from consumer and product data? Building data engineering proficiency and the ability to collect and utilize comprehensive data pertaining to the customer journey will become highly relevant for CPGs, rather than relying only on external inputs. Data engineering is fundamental to achieving quantifiable gains from analytics. It can help CPG companies create the interfaces and mechanisms that dictate the flow of and access to data.

Building out the overall strategy for data engineering requires scoping out the data needed to align with business objectives and its availability.

When it comes to building a solid data foundation for ecommerce, CPG companies should consider the following:

Building data pipelines: It is important to build a scalable data pipeline that can be queried at high speed and hosted in a cloud environment. This encompasses collecting data from different sources, storing the data (including in data lakes), and in-memory processing.

Data warehousing: Data pipelines collect data from multiple sources and store it in a data warehouse in a structured format via ETL. This acts as a single source of truth, simplifying the company’s analysis and reporting processes (a minimal sketch of this ETL flow appears after this list).

Data governance: Establishing processes for data availability, integrity, visibility to users, and security.

ML models in production: Integrating production ready machine learning models into the workflows can be the key to optimizing the available data and gaining significant business benefits.

Embrace AI/ML: Leverage predictive analytics and AI to improve financial metrics and the overall customer experience. Develop predictive analytics and AI use cases to transform processes, optimize operations, and enhance customer experiences.

Customer data platform: Enables brands to collect, unify, enrich, and activate their customer data effectively, and to manage and enhance the first-party data required to better know and engage with consumers to drive increased margins and revenue.
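
As a rough illustration of the pipeline-to-warehouse flow described above, the following Python sketch extracts hypothetical order data, derives a daily revenue metric, and loads it into a SQLite table standing in for a warehouse; the file name and schema are assumptions, not a reference implementation:

import sqlite3
import pandas as pd

# Extract: pull raw order data from a source file
# (hypothetical columns: order_id, sku, qty, price, ts).
orders = pd.read_csv("raw_orders.csv")

# Transform: clean types and derive the metric the business cares about.
orders["ts"] = pd.to_datetime(orders["ts"])
orders["day"] = orders["ts"].dt.strftime("%Y-%m-%d")
orders["revenue"] = orders["qty"] * orders["price"]
daily = orders.groupby(["day", "sku"], as_index=False)["revenue"].sum()

# Load: write the structured result to a warehouse table that acts as a
# single source of truth for reporting (SQLite stands in for the warehouse).
with sqlite3.connect("cpg_warehouse.db") as conn:
    daily.to_sql("daily_sku_revenue", conn, if_exists="replace", index=False)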

Uncovering insights from granular user data and other data types such as transactional data, operations data, etc. will allow CPG companies to develop a more personalized approach to reaching and engaging with shoppers. More importantly, companies need to create an effective data strategy that is aligned with the business goals in order to derive value from the data while driving ROI.

New age e-commerce players have already challenged the status quo and successfully exhibited the perks of effectively leveraging data. Even though CPG companies have been investing in data analytics, they need to revamp their approaches to fit the current paradigm. CPG companies with data-driven, customer-centric strategies will gain more traction due to the demand for more personalized, convenient, and safe shopping experiences.

Advanced analytics can drive incremental revenue growth by up to 10% by helping companies launch new lines or modify products based on customer preferences. It can also improve profitability by 1% – 2% by helping companies optimize their manufacturing and supply chain processes.

Conclusion
Fundamental shifts in shopping and consumer attitudes have changed the grocery landscape forever. The CPG sector, which is heavily dependent on what happens in grocery retail, will have to adapt to the new models. E-commerce sales are accelerating as CPG firms focus on business sustainability and customer engagement. For a CPG business sized at USD 635 billion in 2019 and growing at 2% annually, a 10% share of total revenue coming from e-commerce would represent a significant business opportunity for the future.

While CPGs have been conservative in leveraging emerging technologies due to the need for upfront investment, the pandemic is compelling them to rapidly adopt and integrate digital technologies. The ability to harness data around the rapidly shifting environment has become an important differentiator. CPG companies that move into action quickly to enhance their e-commerce capabilities and leverage data analytics to address consumer needs will emerge winners.

About the author
Jayant is Director of Marketing and Pre-sales at Sigmoid and is passionate about applying data & analytics to solve business problems. He has helped CPG and Retail companies globally to leverage IT for business transformation.


Building A Cyber-Physical Grid for Energy Transition (Part 4 of 4)

The new distributed energy market imposes new data and analytics architectures

Introduction

Part 1 provided a conceptual-level reference architecture of a traditional Data and Analytics (D&A) platform.

Part 2 provided a conceptual-level reference architecture of a modern D&A platform.

Part 3 highlighted the strategic objectives and described the business requirements of a TSO that modernizes its D&A platform as an essential step in the roadmap of implementing its cyber-physical grid for energy transition. It also described the toolset used to define the architecture requirements and develop the future state architecture of TSO’s D&A platform.

This part maps the business requirements described in part 3 into architectural requirements. It also describes the future state architecture and the implementation approach of the future state D&A platform.

TRANSPOWER Future state Architectural Requirements

In order to develop the future state architecture, the business requirements described in part 3 are first mapped into high-level architectural requirements. These architectural requirements represent the architectural building blocks that are missing or need to be improved in each domain of TRANSPOWER enterprise architecture in order to realize the future state architecture. Table 1 shows TRANSPOWER high-level architectural requirements.

Table 1: TRANSPOWER high-level architectural requirements

The Future State Architecture of TRANSPOWER Data and Analytics Platform

Figure 1 depicts the conceptual-level architecture of TRANSPOWER's digital business platform.  Modernizing the existing D&A platform is one of the prerequisites for TRANSPOWER to build its digital business platform. Therefore, TRANSPOWER used the high-level architectural requirements shown in Table 1 and the modern data and analytics platform reference architecture described in Part 2 to develop the future state architecture of its D&A platform. Table 2 shows some examples of TRANSPOWER business requirements and their supporting digital business platform applications, as well as the D&A platform architectural building blocks that support these applications. These D&A architectural building blocks are highlighted in red in Figure 2.

Figure 1. Conceptual-level architecture of TRANSPOWER digital business platform

Table 2: Examples of TRANSPOWER business requirements and their supporting digital business platform applications and D&A platform architectural building blocks

Figure 2: Examples of the new architectural building blocks

The Implementation Approach

After establishing the new human capital capabilities required for the implementation of the digital business transformation program, TRANSPOWER started to partner with relevant ecosystem players and deliver the program.

The implementation phase of the D&A platform modernization was based on the Unified Analytics Framework (UAF) described in Part 3. The new D&A applications and architectural building blocks described in Table 2 are planned and delivered using Part II of the UAF (including Inmon’s Seven Streams Approach).  According to Inmon’s Seven Streams Approach, stream 3 is the “driver stream” that sets the priorities for the other six streams, and the business discovery process should be driven by the “burning questions” that the business has put forward as its high-priority questions. These are questions for which the business community needs answers so that decisions can be made and actions can be taken that will effectively put money on the company’s bottom line. Such questions can be grouped into topics, such as customer satisfaction, profitability, risk, and so forth. The information required to answer these questions is identified next. Finally, the data essential to manufacture the information that answers the burning questions (or even automates actions) is identified. It is worth noting that the Information Factory Development Stream is usually built topic by topic or application by application. Topics are usually grouped into applications, and a topic contains several burning questions. A topic often spans multiple data subject areas.

Figure 3 depicts the relationship between Burning Questions, Applications, Topics, Data Subject Areas, and the Information Factory Development Stream.

Figure 3. Relationship between Burning Questions, Applications, Topics, Data Subject Areas, and the Information Factory Development Stream

 

Conclusion

In many cases, modernizing the traditional D&A platform is one of the essential steps an enterprise should take in order to build its digital business platform, thereby enabling its digital business transformation and gaining a sustainable competitive advantage.  This four-part series introduced a step-by-step approach and a toolkit that can be used to determine which parts of the existing traditional D&A capabilities are missing or need to be improved and modernized in order to build the enterprise digital business platform. The use of the approach and the toolkit was illustrated with an example of a power utility company; however, this approach and the toolkit can easily be adapted and used in other vertical industries such as petroleum, transportation, mining, and so forth.

