5 Ways To Power-Up Your Data Science Use in Small Business

Ever thought about what our world would look like without data science? Many things would be different. Understanding customers would only have been possible in person, and experience would have been the only guide for taking new risks, with no way to predict the ultimate outcomes. 

Thanks to AI and machine learning, that is no longer the case: tracking and analyzing data has never been easier. With a few clicks, you can generate highly accurate insights, using various filters to separate the signal from the noise. 

Introduction

Small businesses are always in the spotlight, from being the best in their local market to opening branches in big cities and carrying on the legacy they are best known for. Without the right data, they too suffer, take heavy losses, and can even disappear. 

Without the right team and tools, surviving in the highly competitive business world is extremely difficult. If you own a small business, you have to be more deliberate about leveraging data science to scale it.

Having said that, here are five expert tips to power up data science use in your small business. Let’s dive in. 

Every business has its own strategies, no matter how big or small, and that is what makes it stand out in the market. With effective branding, advertising, and customer experience, along with the quality of the products they deliver, businesses establish their position in the market. That’s how one brand differs from another.

The Data Science Strategies That You Need To Scale-up Your Small Business Are:

Hire A Data Scientist With 2 – 3 Years Of Experience (In Your Relevant Industry)

As a business owner, you have many employees working under you. Treat them well enough that they become the voice of your brand, attracting new and returning customers. Say you run a SaaS startup: hire a data scientist with a solid 2-3 years of experience who has already worked in the SaaS industry. 

Such a hire will already understand the kind of data your company needs; share your objectives and goals, and they can help you far more effectively. They can find and analyze new trends, surface customer preferences, and much more. Hiring the best professional for your team can be costly, so if the budget isn’t there, upskill one of your existing employees or work with a consultant who can point you in the right direction. 

Using The Right Set Of Data To Make Better Decisions

The right set of data matters a lot if you want your conclusions to be accurate. That dataset should be packed with the concrete evidence and statistics your business needs. For this purpose, data wrangling is necessary to weed out the outliers (a short wrangling sketch follows the list below). 

Therefore, the best ways to look into your data are:

  • Collecting survey reports to identify the products, services, and features that matter. 
  • Conducting user surveys to find out how well customers relate to your product.
  • Researching the market before launching new products to understand how they might perform.
  • Determining business threats and new opportunities. 
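
As a rough illustration of the data-wrangling step mentioned above, here is a minimal sketch that filters obviously bad survey rows before analysis. The file name, column names, and thresholds are assumptions made up for this example, not a prescription.

```python
import pandas as pd

# Hypothetical survey export; the path and columns are placeholders.
df = pd.read_csv("survey_responses.csv")

# Drop rows with missing answers in the fields we care about.
df = df.dropna(subset=["age", "monthly_spend", "satisfaction"])

# Keep satisfaction scores inside the valid 1-5 range.
df = df[df["satisfaction"].between(1, 5)]

# Treat spend values more than 3 standard deviations from the mean as outliers.
spend = df["monthly_spend"]
clean = df[(spend - spend.mean()).abs() <= 3 * spend.std()]

print(f"Kept {len(clean)} of {len(df)} responses after wrangling")
```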

The Right Tools And Software That Make Your Work Super-Easy

Gathering and analyzing data is a humongous task. Doing it manually can kill your productivity and leave you with a headache when the work isn’t finished on time. Manual work also carries a high chance that your results won’t be accurate and that you’ll miss a slice of the data for the same reason. 

Python and its libraries are excellent data science tools that can do a lot of work in very little time. Adding a data visualization tool such as Tableau or Power BI will help you make sense of unstructured data and turn complex decisions into manageable ones. 

Thus, mastering MySQL, Excel, Python, R, Tableau, Microsoft Azure, Apache Spark, and big data platforms such as Hadoop will cover most of the work you need done. 
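
If Python ends up being the tool you lean on, a quick exploratory pass can be as simple as the sketch below. The file name and column names are placeholders, and matplotlib stands in here for the kind of chart a tool like Tableau or Power BI would give you.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder sales export; adjust the path and columns to your own data.
sales = pd.read_csv("monthly_sales.csv", parse_dates=["month"])

# Summary statistics give a first feel for the numbers.
print(sales[["revenue", "orders"]].describe())

# Total revenue per month as a simple trend chart.
monthly = sales.groupby(sales["month"].dt.to_period("M"))["revenue"].sum()
monthly.plot(kind="bar", title="Revenue by month")
plt.tight_layout()
plt.show()
```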

Identify And Target New Customers Using Your Existing Customers 

You’ll have many existing customers who speak about your business, love what you sell, and come back for their next purchase. But what about new customers? How can you target them better, and what do they like most? 

Start by identifying where most of your customers come from, how they interact with your products, and which problem your products solve for good. On top of that, great customer service wins you many new customers through word of mouth. 

The best way to get insight is to run ads for local and nearby areas and then dive into the Google Analytics dashboard, which gives you a complete picture of how customers interact with the ads they see: their location, areas of interest, and much more. Get this from the marketing team, combine it with your own data science, and produce a robust report.

Discover New Trends And Opportunities To Scale-up Your Business 

To stay at the top of your business, you need to follow ongoing trends and look for the opportunities your competitors are missing. When you fill those gaps, you build trust in your customers’ minds. 

As a data scientist, your primary work is to do research, come up with concrete ideas, and plan effectively. If you want to sustain your position at the top, thorough research with advanced tools will surface better opportunities. Try them out, at least as a dry run, to see how they work for your company and to collect your customers’ feedback. 

If it works, great. If not, look for even better ideas. Business is all about taking risks, but calculated ones, so a miss won’t hurt you much.

Final Words 

Taking new, calculated risks is a proven way to grow your business fast. But when you invest without research, you face a significant loss that is tough to recover from. And just because you run a small business doesn’t mean it can never go big. 

You can make it big, but the right strategies, mindset, and team are what will get you there. This post covered five best practices for powering up data science use in your small business. Let us know your thoughts: how would you implement data science in your small business, and which tip did you find most helpful? 


Three Steps to Addressing Bias in Machine Learning

Data is powering this century. There is an abundance of data coming from the digitized world: IoT devices, voice assistants like Alexa and Siri, fitness trackers, and medical sensors, to name a few. Data science is becoming the center of high-growth sectors such as healthcare, logistics, customer service, banking, and finance. AI and machine learning are now mainstays in boardroom conversations, and with this data-centricity comes the big question of governance and ethics in data science. 

Step 1. Acknowledge Bias 

Are we ethically responsible for handling data?

Everyone is responsible for handling data with the utmost care. Bypassing ethical data science just for monetary gain fosters bias and stereotyping. Similarly, cross-validating real-world data against biased data leads to careless business decisions that cost an enterprise not just revenue but, more importantly, its reputation and customer loyalty. Every enterprise has a responsibility to grow its business by cultivating togetherness among communities, being more inclusive, and filtering out unconscious bias.

What are the effects of unethical data science?

Data privacy is becoming a major concern as more and more machine learning models learn our digital footprint and predict our future needs, whether we like it or not. Legislation such as the GDPR (Europe), the Personal Data Protection Act (India), and the California Consumer Privacy Act (CCPA) stresses the importance of data privacy, protecting digital citizens from the dangerous consequences of misused data.

Micro-targeting based on consumer data and demographics influences the actions of the targeted consumer segments. With an abundance of data, it is becoming harder and harder to differentiate truth from falsehood, and micro-targeting without a proper understanding of the data and its source leads to more harm than good.

Healthcare prediction failures, such as IBM Watson’s, lead to irreversible consequences. Right now, the healthcare industry is undergoing a major revolution driven by artificial intelligence. The success of AI in healthcare depends on a one-team approach, with transparent discussion among a diverse set of leaders from both healthcare and data science.

Facial recognition software is known to falsely classify people as having criminal intent based on skin color, because the ML models are trained on predominantly white faces. Multiple facial recognition applications are available in the market, but an application’s success depends on the diversity of the data used to train its models.

Step 2. Understand Bias

1. Know the Bias Types

It is crucial to understand the different bias types and be conscious of their existence in order to handle data ethically. Bias in machine learning can be classified into sample, prejudice, measurement, algorithm, and exclusion bias.

a. Sample Bias

Sample bias arises when training data contains partial or incorrect information. For instance, predicting a customer’s spending activity from their social feeds rather than from relevant payment platforms leads to sample bias.

b. Prejudice Bias

“Our environment, the world in which we live and work, is a mirror of our attitudes and expectations.” – Earl Nightingale

Being prejudiced, holding preconceived opinions, causes harm not just to the business but also to society and the well-being of future generations. It takes immense strength to acknowledge and eradicate unconscious bias. 

c. Exclusion Bias

Everyone is unique, with their own abilities and strengths. That some of us do not follow the norm is no grounds for exclusion; each of us has unique qualities to contribute. Enterprises that do not adopt inclusive policies will be out of the market in short order. 

d. Algorithm Bias

Machines do not understand bias. The erroneous assumptions made, consciously or unconsciously, when selecting datasets and algorithms lead to algorithm bias.

e. Measurement Bias

Measurement bias usually happens when a model favors certain outcomes over others. For example, a model predicting which consumer products will double their sales next quarter based on past sales history will favor items whose prices were marked down over others.

Step 3. Eliminate Bias 

Eliminating bias is not a one-time activity but a continuous process. It starts with selecting the right algorithm and setting up a data governance team that includes everyone involved in the ML project lifecycle: the business team, data scientists, and the MLOps team.

Models are less prejudiced if the test datasets come from the real world rather than from the sample set. Real-world data also has the advantage of being diverse and inclusive, since it comes from real customers. At the same time, including data from active customers alone will not solve the inclusion problem. Such unconscious bias can be detected by keeping a human in the loop along with continuous monitoring. 
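
One concrete way to put that monitoring into practice is to track a simple fairness metric, such as the gap in positive-prediction rates across groups, on each batch of live predictions and route large gaps to a human reviewer. The sketch below is a minimal illustration with invented column names and an arbitrary threshold, not a complete fairness audit.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Difference between the highest and lowest positive-prediction rates across groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical batch of recent predictions with a sensitive attribute attached.
batch = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A"],
    "approved": [1,   0,   0,   0,   1,   1],
})

gap = demographic_parity_gap(batch, "group", "approved")
if gap > 0.2:  # the threshold here is an arbitrary example value
    print(f"Warning: approval-rate gap of {gap:.2f} across groups; route for human review")
else:
    print(f"Approval-rate gap of {gap:.2f} is within tolerance")
```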

Summary

With data growing exponentially and legislation controlling data usage, it becomes crucial to exercise data consumption for the common good. Fostering togetherness by collaborating with people from different sectors, and being socially responsible and accountable for using data ethically, will become the foundation of a successful AI revolution.

A version of this blog was originally published here – http://predera.com/reimagining-ai-building-togetherness-with-bias-m…


Data Ingestion Best Practices

Organizations and businesses need data ingestion to make better operational decisions and provide better customer service. Through data ingestion, businesses can understand the needs of their stakeholders, consumers, and partners, allowing them to stay competitive. It is also the most effective way for businesses to deal with tons of inaccurate and unreliable data.

How is data ingestion done?

It is performed in various ways; the most common include the following (a minimal sketch of the batch and streaming patterns appears after the list):

  • Real-time – Ingesting data in real time is also known as streaming. It is the most critical method when the information is time-sensitive: data is retrieved, processed, and stored in real time for real-time applications such as decision making.

  • Batch – The batch approach entails moving data at predetermined times. This method is excellent for recurring processes, such as reports that must be generated on a regular basis (for example, daily).

  • Lambda Architecture – The lambda architecture combines real-time and batch procedures, taking the advantages of both. It uses real-time ingestion to extract information from time-sensitive data and batch ingestion to provide a broad view of recurring data.
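
To make the batch and real-time patterns above more concrete, the minimal sketch below contrasts a scheduled batch load with a record-at-a-time streaming handler feeding the same store, as a lambda-style setup would. The function names and the in-memory store are hypothetical placeholders, not any particular ingestion tool's API.

```python
import csv
from datetime import datetime, timezone

def ingest_batch(path: str, store: list) -> None:
    """Batch pattern: load a whole file at a scheduled time (e.g., a nightly job)."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            store.append({**row, "ingested_at": datetime.now(timezone.utc).isoformat()})

def ingest_event(event: dict, store: list) -> None:
    """Streaming pattern: process each record as it arrives from a queue or API."""
    store.append({**event, "ingested_at": datetime.now(timezone.utc).isoformat()})

# In a lambda-style architecture both paths feed the same store.
store: list = []
ingest_event({"order_id": 1, "amount": 25.0}, store)   # real-time path
# ingest_batch("orders_2021-08-01.csv", store)         # scheduled batch path
print(store)
```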

Best Practices:

Self-service data ingestion 

Many organizations have multiple data sources, and all of this data must be ingested before it is stored and processed. Data continues to grow in size and scope, requiring enterprises to keep adding the resources needed to manage it. If the ingestion process is self-service, it relieves the pressure to constantly expand resources through methods such as automation, and the focus shifts to processing and analysis. Ingestion becomes very simple, requiring little to no assistance from technical personnel.

Automating the process 

As organizational data continues to grow in both volume and complexity, manual techniques for handling and processing it can no longer be relied upon. Automating every step of the process saves time, reduces manual intervention, minimizes system downtime, and increases the productivity of technical personnel.

Automating the ingestion process offers additional benefits, including architectural consistency, error management, consolidated management, and safety. Together these reduce the time taken to process data.
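
As a small illustration of what that error management can look like, the sketch below wraps a hypothetical load step with retries and a dead-letter list so failed records are captured for inspection instead of silently lost. The function names are assumptions for the example.

```python
import time

def load_record(record: dict) -> None:
    """Placeholder for the real load step (database insert, API call, etc.)."""
    if "id" not in record:
        raise ValueError("record missing id")

def ingest_with_retries(records, max_attempts: int = 3) -> list:
    dead_letter = []
    for record in records:
        for attempt in range(1, max_attempts + 1):
            try:
                load_record(record)
                break
            except Exception as exc:
                if attempt == max_attempts:
                    dead_letter.append((record, str(exc)))  # keep for later inspection
                else:
                    time.sleep(2 ** attempt)  # simple backoff before retrying
    return dead_letter

failed = ingest_with_retries([{"id": 1}, {"name": "no id"}])
print(f"{len(failed)} record(s) routed to the dead-letter list")
```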

Anticipating challenges and planning appropriately

The imperative of any data analysis is to transform data into a usable format. As data grows in volume and variety, so do the complexities of analysis. A process that helps you anticipate these challenges in advance makes it much easier to complete the whole data processing task successfully. Data ingestion is one big process that helps you anticipate these challenges, plan accordingly in advance, and work through them efficiently as they come, without losing time or output.

Use of Artificial Intelligence

Using artificial intelligence techniques such as statistical algorithms and machine learning eliminates the need for manual intervention in the ingestion process. Manual intervention increases the number and frequency of errors. Employing AI not only eliminates these errors but also makes the whole process faster and more accurate.

Data ingestion reduces the complexity of gathering data from multiple sources and frees up time and resources for subsequent data processing steps. The emergence of data ingestion tools such as DQLabs has created efficient options that help businesses improve their performance and results by easing data-driven decision making.


Solving the Parsing Dilemma

There’s a much-maligned topic in web scraping: data parsing. Building scrapers would be a lot easier if the data presented through HTML weren’t intended for browsers. But it is, which means the data extraction process has to jump through several hoops before delivering results.

Parsing is part of that process. Unfortunately, it’s one of the most resource-intensive parts of the entire web scraping chain. Developing a parser for a specific website is not enough; it has to be maintained over time. Even then, that might not be the end, as some complex websites may need numerous parsers to work the data out of the source.

The dilemma

Any sufficiently large scraping project has to develop its own parsers. That means dedicating time and resources to a comparatively low-skill task. Most of the time, developing and maintaining parsers falls to junior developers.

However, junior developers are a highly valuable resource. Spending time writing and maintaining parsers barely improves their skills; in fact, it can breed a certain level of annoyance.

On the other hand, parsing is a critical part of the scraping process. Most of the time, the data acquired is messy and unusable without intervention. Since the end goal of all web scraping, whether for personal or commercial use, is to provide data for analysis, parsing is a necessity.

In short, we have an essentially necessary process that consumes a significant portion of resources and time while being neither particularly challenging nor useful to the individual. In other words, it’s a resource sink. Solving this challenge would free up a lot of highly skilled hands and brains for greater work.

A look towards automation

If you were to approach any sensible CXO or businessperson in general with an idea to save significant time for developers, they would accept the suggestion with open arms. There’s rarely anything better than saving resources through automation.

However, automating parsing isn’t as simple as it may seem. Partly, the reason is the frequent maintenance required. Usually, the requirement arises because websites change their layouts. If they do so, the parser breaks.

Yet predicting future layout and code changes is simply impossible, so no rule-based approach is truly viable. Classical programming is of little help here, and manual work, as mentioned previously, is a huge time and resource sink.
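
To see why rule-based parsing is so brittle, consider the minimal sketch below: a hard-coded CSS selector extracts a price until the site renames or restructures the element, at which point the parser silently returns nothing. The HTML snippets and selector are invented for illustration.

```python
from bs4 import BeautifulSoup

OLD_LAYOUT = '<div class="price">$19.99</div>'
NEW_LAYOUT = '<span class="product-price-current">$19.99</span>'  # after a redesign

def parse_price(html):
    """Rule-based parser tied to one specific layout."""
    soup = BeautifulSoup(html, "html.parser")
    node = soup.select_one("div.price")  # breaks as soon as the tag or class changes
    return node.get_text(strip=True) if node else None

print(parse_price(OLD_LAYOUT))  # "$19.99"
print(parse_price(NEW_LAYOUT))  # None -- the selector no longer matches
```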

There’s one option remaining that has built up a lot of hype over the past decade or so. That is machine learning applications. Parsing seems to be the perfect way to test the mettle of machine learning engineers.

Since HTML has a similar structure across certain categories of pages, the visual changes are decidedly small. Additionally, layout changes aren’t usually massive overhauls of an entire website; they’re mostly incremental UX and UI improvements. While that may add to a developer’s annoyance, it makes parsing a great candidate for a stochastic algorithm looking for similarities between trained data and new data.

Preparing for adaptive parsing

Before engaging in any machine learning project, at least these questions should be answered beforehand:

  1. What will be the limits of the model?
  2. What type of learning will be needed?
  3. What type of data (labeled or unlabeled) will be used?
  4. How will the data be acquired?

Luckily, for our Adaptive Parser project at Oxylabs, the last three questions had the easiest answers. Since we already knew what we were looking at and for (data from specific pages), we could use labeled data. That meant supervised learning, one of the most practical and easiest approaches to execute, could be used.

However, the true difficulty lies in answering the first question as the rest, at least partly, depend on it. Since all resources are finite, the machine learning model should be as narrow as required and as wide as possible. For us, it meant looking at how our clients are using our solutions (e.g. Real-Time Crawler) and making a decision based on data.

As we discovered through our research, e-commerce product pages were the most painful to parse. The source can be a bit wonky for parsing purposes, and there are usually almost identical fields that are only sometimes present (e.g. “new price”/“old price”).

These fields can confuse machine learning models as well, due to their similarity. However, answering the question about limits lets us set proper expectations for accuracy and the amount of data required. Clearly, we would need quite a bit of labeled data, since we would have at least one problematic field.

Answering the final question was somewhat easier. We already knew where to pick up our examples; in fact, we could quite quickly collect a large number of e-commerce pages. The strenuous part is labeling: it’s easy to get your hands on large amounts of unlabeled data. 

Labeling data and training

Every supervised learning dataset has to be labeled. In our case, that meant providing labels for most fields on every e-commerce page, and it had to be done at least partly by hand. If it could be fully automated, someone would have already created an adaptive parser.
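
What labeled data means in this setting can be pictured as the hypothetical records below: each candidate HTML node is described by a handful of features and tagged with the field it represents (or none), which is the form a supervised classifier can learn from. The feature names and labels are invented for illustration and are not Oxylabs' actual schema.

```python
from collections import Counter

# Hypothetical labeled examples for supervised field extraction.
# Each row describes one HTML node; "label" is what the annotator assigned.
labeled_nodes = [
    {"tag": "span", "class": "price--current", "text": "$24.99",
     "has_currency": True, "depth": 7, "label": "new_price"},
    {"tag": "span", "class": "price--old", "text": "$39.99",
     "has_currency": True, "depth": 7, "label": "old_price"},
    {"tag": "h1", "class": "product-title", "text": "Wireless Mouse",
     "has_currency": False, "depth": 4, "label": "title"},
    {"tag": "div", "class": "footer-links", "text": "About us",
     "has_currency": False, "depth": 3, "label": "none"},
]

# A classifier is then trained to predict "label" from the other attributes,
# so it can still find the price when class names or markup change slightly.
print(Counter(row["label"] for row in labeled_nodes))
```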

To save time and in-house resources, we took a two-pronged approach. First, we hired a few helping hands to label fields from our soon-to-be training set. Second, we spent some time developing a GUI-based labeling application to speed up the process. The idea is simple: spend more financial resources on manual, repetitive tasks to free up time for cognitive tasks for our machine learning engineers.

After getting our hands on enough labeled data to start training our Adaptive Parser, the process was really a lot of trial and error with some strategizing peppered in between. Sometimes the model will struggle with specific parts, and some logic-based nudging will be required (or will at least speed up the process).

Many months and hundreds of tests later, we have a solution that can automatically parse fields on e-commerce product pages and adapt to changes with reasonable accuracy. Of course, maintenance will now be the challenge, but we have shown that it’s possible to automate parsing.

Conclusion

Automating parsing in web scraping isn’t just about saving resources. It’s also about increasing the speed, efficiency, and accuracy of data over time. All of these factors influence the way businesses engage with external data. Primarily, there’s less time dedicated to working around the data and more time spent working with it.

More discussions of pressing topics in web scraping, industry trends, and expert tips will be shared at Oxycon, an annual web scraping conference. It will take place online on August 25-26, and registration is free of charge.


Android Or Ios – Which Platform Should You Choose For Developing Your App

Mobile application development is among the most consistently growing sectors in software production, with rising demand for fast, user-friendly apps in recent years. One statistic suggests that in 2020 alone, users spent an average of 87% of their total online time in mobile apps. When you opt for app development, there are two leading platforms to choose from: Android and iOS. Both offer incredible development options. Before deciding which platform to build your app on, you should thoroughly compare the two. This article gives an overview of each platform’s perks and perils and points out the differences. 

After you have decided which platform to go with, the next step is to choose the developers. Whether you hire an iOS app developer or an Android app developer, make sure they are technically sound and have the knowledge your app development requires.

The Benefits and Drawbacks of iOS App Development

iOS app development is always in high demand because iOS apps consistently perform extremely well. The platform is fast, highly reliable, very user-friendly, and few bugs tend to surface in the final output of the developed app.  

An Experience that is sleek and flawless

Apple provides developers with detailed guidelines, and those guidelines help them create the user interface. So when you hire an iOS app developer, they can easily create a user interface for your application. The interface options may sometimes be limited, but in return this approach usually guarantees an exceptional user experience.

The Drawbacks of iOS development

Native iOS app development requires software like Xcode, which runs only on a Mac. So when you hire an iOS app developer to build for iPhones, they will always need at least one piece of Apple hardware.

Extra Demanding Requirements for App Release

The Apple App Store is a bit more demanding than the Google Play Store. Even if your app doesn’t break any rules in the App Store guidelines, it can still be rejected if it is found to be irrelevant or of little use. So when you hire an iOS app developer, make sure they are well versed in Apple’s App Store guidelines and that the app they develop is good enough to be accepted.

Fewer Options for Customization 

Later on, after your app is developed and published on the App Store, customizing its interface is restricted, so you lose that flexibility. It is also difficult to add new features if at any stage the app needs to interact with third-party software.

The Benefits and Drawbacks of Android App Development

Flexibility

Android normally has a less restricted environment than iOS. In terms of distribution, Android applications run on any Android device, and there are no hardware compatibility issues to contend with. This makes the development process much more flexible on Android.

Availability of huge and elaborate learning resources

The Android platform also allows a smoother development process by relying on the Java language. Java is an extremely versatile programming language that runs on Windows, Linux, and macOS, which lets developers build Android applications regardless of the operating system their machine runs. Google provides a vast knowledge base for beginners, along with interactive materials, exercises, and whole training programs for different levels of Android developers. 

Easy Publishing of Apps

When it comes to publishing apps, Google is less strict. Google allows developers to post to the Google Play Store. Previously the review process was performed automatically within about seven hours; now it can take up to a week for new developers. Despite this change, almost all Android apps that do not violate the policies get approved. Moreover, app developers pay only a very small registration fee. 

The ability to go beyond smartphones

Developing an Android application means building software for a complete ecosystem of devices, so you can expand your app’s functionality. The same app can run on Wear OS devices, Daydream and Cardboard VR headsets, Android Auto, and various other platforms, giving you the power to integrate your application into cars, smartwatches, and TVs alongside mobile phones.

Issues with quality assurance

Fragmentation has its benefits: it lets developers target many different Android devices simultaneously. But it makes the testing process very complicated. Even for the simplest apps, developers have to deliver fixes at frequent intervals, because the majority of users keep running older versions of the OS even after upgrades are released. So hire an Android app developer who is well equipped to handle this.

Higher Cost and time requirement

Developing an Android app is more time-consuming than developing an iOS app, and because it takes more time in development and quality assurance, the cost increases too.

Availability of more free apps affecting in-app purchases 

Android users tend to look for free apps and spend less on in-app purchases than iOS users do, so the return on investment is not always as high.

Security issues

Android is an open-source platform, so there is always a greater chance of falling victim to cyber fraud, whereas iOS is a much more closed counterpart and such attacks are rare.

A brief comparison of iOS vs. Android development 

Both Android and iOS app development are gaining popularity over time, and both have bright prospects for the next few years. Both platforms will hold the market at their present strength, and neither is going to lose popularity.

If you are planning to develop an app that provides extensive additional content, or a retail app through which users make purchases, then iOS will give you more opportunities to make a profit.

Android apps are more popular among users in medical or technical fields, whereas iOS-based technology is popular among high-end business professionals, sales experts, and senior managers, and is preferred by audiences with higher household incomes who strive to keep up with technology trends.

Android development has greater global coverage, with audiences in Africa, Latin America, and parts of Europe, whereas iOS’s target audience is concentrated in Australia, North America, and Western Europe.

So which platform to choose first?

When you are deciding between iOS development and Android development, the following factors should be taken into account:

  • The location of your users.
  • The budget you are willing to spend on development, along with your development time requirements.
  • How unique an interface you want for the app.

To Conclude 

Both the Android and iOS operating systems dominate the market, both have good future prospects, and both have extremely large user bases across all existing fields. Ultimately, whether you hire an Android app developer or an iOS app developer, your choice of platform should be dictated by the application you are developing and your future plans for it.


Moving Beyond the 9-to-5

As the Pandemic wanes (more or less), the debate about going back to the office vs. continuing to work from home remains in full swing. Central to this debate is the question about whether it is, in fact, better for companies for people to work from an office than it is to work remotely. The answers to this can be wildly divergent, from those who believe that productive work can only be done in an office, where resources can be consolidated, and people can meet, face to face, with one another for collaboration, to those who see work better done when the workers essentially control their own schedules and workflows.

To that end, one of the fundamental questions in this debate is what, exactly, it means to be productive. Productivity has been an integral part of the work environment for more than a hundred and twenty years, yet it is also something that is both poorly defined and quite frequently massively misused. To understand this, you have to go back to Frederick Taylor, who first defined many of the principles of the modern work environment around the turn of the Twentieth Century.

How Frederick Taylor Invented Productivity

 

Frederick Taylor, Genius or Con Man?

Taylor was an odd character to begin with. He was born to a fairly wealthy family and managed to get admitted to Harvard Law School, but due to deteriorating eyesight, he decided to go into mechanical engineering instead, working first as an apprentice and later master mechanic at Midvale Steel Works in Pennsylvania, eventually marrying the daughter of the president of the company while working his way up from the shop floor to sales and eventually to management.

From there, Frederick Taylor began putting together his own observations about how inefficient the production lines were, and how there needed to be more discipline in measuring productivity, which at the time could be measured as the number of components that a person could produce in a given period of time. In 1911, he wrote a monograph on the subject called The Principles of Scientific Management, which generalized these observations from the steel mill to all companies.

Taylor’s work quickly found favor in companies throughout the United States, where his advocacy of business analytics, precision time-keeping, and performance reviews seemed to resonate especially well in the emerging industrial centers of the country. At the same time, the data that he gathered was often highly suspect – for instance, he would frequently use the measurements of the fastest or strongest workers as the baseline for all of his measurements, then would recommend that owners dock the pay of workers that couldn’t reach these levels. He also mastered the art of business consulting, pioneering many of the techniques that such consultants would use to sell themselves into companies decades later.

Productivity was one of his inventions as well, and it eventually became the touchstone of corporations globally – a worker’s output could be measured by his or her productivity: the number of goods they produced in a given period of time. However, even this measure was somewhat deceptive, first, because this measure was at least in part determined by the automation inherent within an assembly line, and in part assumed that production of widgets was the only meaningful measurement in a society that was even then shifting from agricultural to industrial, while other factors such as quality or complexity of the products, physical or mental states of the workers, or even stability of the production line were ignored entirely.


Do Not Fold, Spindle or Mutilate

Productivity In The Computer Age

Automation actually made a hash of productivity early on. An early bottling operation for beer usually involved manually filling a bottle, then stoppering it. A skilled worker could get perhaps a dozen such bottles out a minute and could sustain that for an hour or so before needing to take a break. By the 1950s, automation had improved to the extent that a machine could fill and stopper 10,000 bottles a minute, a thousandfold increase in productivity. The bottler at that point was no longer performing the manual labor, but simply ensuring that the machine didn’t break down, that the empty bottles were positioned in their lattice, and that the filled ones were boxed and ready for shipment. Timing the bottler for filling bottles no longer made any sense but still, the metric persisted.

Not surprisingly, corporations quickly adopted Taylorism for their own internal processes. People became measured by how many insurance claims they could process, despite the fact that an insurance claim required a decision, which meant understanding the complexity of a problem. Getting more insurance claims processed may have made the business run faster, but it did so at the cost of making poorer decisions. It would take the rise of computer automation and the dubious benefits of artificial specialized intelligence to get to the point where semi-reasonable decisions could be made far faster, though the jury is still out as to whether the AI is in fact any better at making the decisions than humans.

Similar productivity issues arise with intellectual property. In the Tayloresque world, Ernest Hemingway was terribly unproductive. He only wrote about twenty books over his forty years of being a professional writer, or one book every two years. Today, he could probably write a book a year, simply because revising manuscripts is far easier with a word processor than a typewriter, but the time-consuming part of writing a book – actually figuring out what words go into making up that book – will take up just as long.

Even in the world of process engineering, in most cases what computers have done is to reduce the number of separate people handling different parts of the process, often down to one. Forty years ago, putting together a slide presentation was a fairly massive undertaking that required graphic artists, designers, photographers, copywriters, typographers, printers, and so forth, and took weeks. Today, a ten-year-old kid can put together a Powerpoint deck that would have been impossible for anyone to produce earlier without a half-million-dollar budget.

We are getting closer to that number being zero: fill in some parameters, select a theme, push a button, and *blam* your presentation is done. This means, of course, that there are far more presentations out there than anyone would ever be able to consume, and that the bar for creating good, eye-catching, memorable presentations becomes far, far higher. It also means that Tayloresque measurements of productivity very quickly become meaningless when measured in presentations completed per week.

That’s the side usually left out in talking about productivity. Productivity is a measure of efficiency and efficiency is a form of optimization. Optimizations reach a point of diminishing returns, where more effort results in less meaningful gains. That’s a big part of the reason that productivity took such a nosedive after the turn of the twenty-first century. Even with significantly faster computers and algorithms, the reality was that the processes that could be optimized had already been so tweaked that the biggest factor in performance gains came right back down to the humans, which hadn’t really been changed all that much in the last century.

A forum that I follow posed the question about whether it was better for one’s career to work in the office or work from home. A person made the comment that people who work remotely may get passed over for promotion compared to someone who comes in early and stays late because the managers don’t see how hard working the remote worker is compared to the office worker. This is a valid concern, but it brings back a memory of when I started working a few decades ago and found myself working ten and eleven-hour days at the office for weeks on end trying to hit a critical deadline. Eventually, I was stumbling in exhausted, and the quality of my work diminished dramatically. I was essentially also giving my employer three additional hours a day at no cost, though after a while, they were getting what they paid for.

Knowledge work, which I and a growing number of people do, involves creating intellectual property. Typically, this involves identifying structure, building, testing, and integrating virtual components. It is easy to tell at a glance how productive I am, both in terms of ascertaining quantity (look at the software listings or article page) as well as quality (see if it correctly passes a build process or read the process). This is true for most activities performed today. If there are questions, I can be reached by email or phone or SMS or Slack or Teams or Zoom or any of a dozen other ways. With most DevOps and continuous integration processes, a manager can look at a dashboard and literally see what I have worked on within the last few minutes.

In other words, regardless of whether you are working remotely vs. working in the office, there are ample tools that a manager has to be able to ascertain whether a worker is on track to accomplish what they have pledged to accomplish. This is an example of goal-oriented management, and quite frankly it is exactly how most successful businesses should be operating today.


The Paycheck Was Never Meant To Measure Time

The Fallacy of the Paycheck and the Time Clock

So let’s talk a little bit about things from the perspective of being a manager. If you have never done it before, managing a remote workforce is scary. Most management training historically has focused on people skills – reading body language, setting boundaries, identifying slackers, dealing with personal crises, and most importantly, keeping the project that you are managing moving forward. Much of it is synthesizing information from others into a clean report, typically by asking people what they are working on, and some of it is delegating tasks and responsibilities. In this kind of world, there is a clear hierarchy, and you generally can account for the fact that your employees are not stealing time or resources from you because you watch them.

I’ll address most of this below, but I want to focus on the last, italicized statement first because it gets into what is so wrong about contemporary corporate culture. One place where Tayloresque thinking embedded itself most deeply into the cultural fabric of companies is the notion that you are paying your employees for their time. This assumption is almost never questioned. It should be.

Until the start of the middle of the Industrial Age, people typically were paid monthly or fortnightly if they were the employee of a member of the nobility or gentry, or produced and sold their goods if they were craftsmen or farmers, or were budgeted an account if they were a senior member of the church. Often times such payment partially took the form of room and board (or food) or similar services in exchange. Timekeeping seldom entered it – you worked when there was work to be done and rested when the opportunity arose.

Industrialization brought with it more precise clocks and timekeeping, and you were paid for the time that you worked, but because of the sheer number of workers involved, this also required better sets of accounting books and more regular disbursement of funds for payment. It was Taylor that quantized this down to the hour, however, with the natural assumption that you were being paid not per day of work but for ten hours of work a day. This was also when the term work ethic seemed to gain currency, the idea being that a good worker worked continuously, never complained, never asked for too much, and bad workers were lazy and would steal both resources and time from employers if they could get away with it.

In reality, most work is not continuous in nature but can be broken down into individual asynchronous tasks of activity within a queue. It can be made continuous if the queue is left unattended too long or if the time to complete a task increases faster than the rate at which tasks are added to the queue. Office work, from the 1930s to the 1970s, usually involved a staff of workers (mainly female) who worked in pools to process applications, invoices, correspondence, or other content – when a pool worker was done, she would be assigned a new project to complete. This queue and pool arrangement basically kept everyone busy, further cementing the idea that an employer was actually paying for the employee’s time, especially since there was usually enough work to fill the available hours of the day.

That balance shifted in the 1970s and 80s as the impact of automation began to hit corporations hard. The secretarial pool had all but disappeared by 1990 with the advent of computers and networking. While productivity shot up – fewer people were doing much more “work” in the sense that automation enabled far more processing – people began to find themselves with less and less to do and made it possible for companies to eliminate or consolidate existing jobs. A new generation picked up programming and related skills, and the number of companies exploded in the 1990s as entrepreneurs looked for new niches to automate as the barrier to entry for new companies dropped dramatically.


By focusing on demonstrable goals rather than “seat-time”, organizations can become more data-oriented.

The WFA Revolution Depends Upon Goals and Metrics

Since 2000, there have been three key events that have dramatically changed the landscape for work. The first was the rise of mobile computing, which has made it possible for people to work anywhere there is a network signal. The second was the consolidation of cloud computing, moving away from the requirement that resources need to be on the premises. Finally, the pandemic stress-tested the idea of work virtualization in a way that nothing else could have, likely forcing the social adoption of remote work about a decade earlier than it would otherwise have arrived.

Productivity through automation has now reached a stage where it is possible to

  1. get reliable metrics based upon work completed towards specific goals, regardless of time specifically spent,
  2. automate those tasks which do not in fact require more than minimal human intervention,
  3. get access to resources needed to accomplish specific tasks, regardless of where those tasks are accomplished,
  4. provide a superior environment for meeting virtually across multiple time zones, creating both a video and transcript artifact of such meetings,
  5. provide tools for collaborating in the same way, either synchronously or asynchronously (addressing the water cooler problem),
  6. ensure that information remains secure, and
  7. provide a set of eyeballs on evolving situations anywhere in the world at any time.

Put another way – remote force productivity is not the issue here.

Most people are far more productive than they have ever been, to the extent that it is becoming harder and harder to fill a forty-hour week most of the time. I’d argue that when an employer is paying an employee, what they should be doing is spreading out a year-long payment into twenty-six chunks, paying not for the time spent but the availability of the expertise. That the workweek is twenty hours one week and fifty hours the next is irrelevant – you are paying a salary, and the actual number of hours worked is far less important than whether in fact the work is being done consistently and to a sufficiently high standard. This was true before the pandemic, and if anything it is more true today.

Businesses began in the 1970s to start pushing labor laws so that companies could classify part-time workers as hourly – this meant that, rather than having a minimum guaranteed total annual income, such workers were only paid for their time on-premises. By doing so, such workers (who were also usually paid at or even below a minimum wage), would typically be the ones to bear the brunt if a business had a slow week, but were also typically responsible for their own healthcare and were ineligible for other benefits. In this way, even if on paper they were making $30,000 a year, in reality, such workers’ actual income was likely half that, even before taxes. By 1980, labor laws had effectively institutionalized legalized poverty.

After the pandemic, companies discovered, much to their chagrin, that their rapid shedding of jobs in 2020 came back to bite them hard in 2021. Once people have a job, they develop a certain degree of inertia in looking for a new job, and often times may refuse to look for other work simply because switching jobs is always somewhat traumatic. This also tends to depress wage growth in companies, because most companies will only pay a person more (and even then only to a specific minimum) if they also take on more responsibility (in other words, new hires generally make more than existing workers for the same positions).

At the bottom of the pandemic bust, more than 25 million people were thrown out of work, deeper even than during the Great Depression. The rebound was fairly strong, however. It meant that suddenly every company that had jettisoned workers was now trying to rehire new workers all at once. For the first time in a generation, labor had newfound bargaining strength. This also coincided with a long-overdue generational retirement of the Boomers and the subsequent falloff in the number of GenXers, which overall is about 35% smaller than the previous generation. Demographic trends hint that the labor market is going to favor employees over employers for at least the next decade.

Given all that, it’s time to rethink productivity in the Work From Home era. The first part of this is to understand that work has become asynchronous, and ironically, it’s healthier that way. There will be periods of time when employees will be idle, and others where employees will be very busy. Most small businesses implicitly understand this – restaurants (and indeed, most service economy jobs) have slack times and busy times. Perhaps it is time for “hourly” workers to go back to being paid salaries again. This way, if someone is not needed on-site at a particular time, sending them home doesn’t become an economic burden for them. On the flip-side, that also puts the onus on the worker that, should things get busy again, they remain reachable in one of any number of ways.

Once you move into the knowledge economy, the avenues for workers become more open. Salary holds once more here, but so too does the notion of being available at certain times. I’ve actually seen an uptick in the number of startup companies that utilize Slack as a way of managing workflow, even in service sector work, as well as indicating when people need to be in the office versus simply need to be working on projects.

I am also seeing the emergence of a 3-2-2 week: three days that are specifically set aside for meetings, either onsite or over telepresence channels such as Zoom or Teams, two days where people may be on call but generally don’t have to meet and can focus on getting the most productivity without meetings interrupting their concentration, and two days that are considered “the weekend”. When workloads are light (such as during summers or winter holidays), this can translate into “light” vacations, where people are just putting in a couple of hours of work a day during their “Fridays” and are otherwise able to control their schedules. When workloads are heavy (crunch time) the bleed even into the weekend CAN happen, so long as it’s not done for an extended period of time.

Asynchronous, goal-oriented, and demonstrable project planning also becomes more critical in the Work From Home era. This, ironically, means that “scrum”-oriented practices should be deprecated in favor of being able to attach work products (in progress or completed) to workflows – whether that’s updating a Git repository, publishing a blog, updating a reference standard, or designing media or programmatic components. Continuous integration is key here – use DevOps processes to ensure that code and resources are representative of the current state of the project and that provide a tracking log of what has been done by each member of a given team.


Micromanagement, abusive behavior, and political games – is it any wonder people are staying away from the office?

For production teams, this should be old-hat, but it’s fairly incumbent that management works in the same way, and ironically, this is where the greatest resistance is likely going to come from. Traditional management has typically been more face-to-face in interaction (in part because senior management has also traditionally been more sales-oriented). The more senior the position, the more likely that person will need comprehensive real-time reporting, and the more difficult (and important) it is to summarize the results from multiple divisions/departments.

Not surprisingly, this is perhaps the single biggest benefit of a data-focused organization with strong analytics: It makes it easier for managers to see in the aggregate what is happening within an organization. It also makes it easier to see who is being productive, who is needing help, and who, frankly, need to be left behind, which include more than a few of those same managers.

https://www.theatlantic.com/ideas/archive/2021/07/work-from-home-be…

You cannot talk about productivity without also talking about non-productivity. This doesn’t come from people who are genuinely trying but are struggling due to a lack of resources, training, or experience. One thing that many of these same tools can do is to better highlight who those people are without putting them on the spot, and a good manager will then be able to either assign a mentor or make sure they do have the training.

Rather, it’s those workers who have managed to find a niche within the organization where they don’t actually do much that’s productive, but they seem to constantly be busy. Work from home may seem to be ideal here, but if you assume that this also involves goal-oriented metrics, it actually becomes harder to “skate” when working remotely, as there is a requirement for having a demonstrable product at the end of the day.

Finally, one of the biggest productivity problems with WFH/WFA has to do with micromanagement as compensation for being unable to “watch” people at work. This involves (almost physically) tying people to their keyboards or phones, monitoring everything that is done or said, and then using lack of “compliance” as an excuse to penalize workers.

During the worst of the pandemic, stories emerged of companies doing precisely this. Not surprisingly, those companies found themselves struggling to find workers as the economy started to recover, especially since many of these companies had a history of underpaying their workers as well. Offices tend to create bubble effects – people are less likely to think about leaving when they are in a corporate cocoon than when they are working from home, and behavior that might be prevalent within offices – gas-lighting, sexual harassment, bullying, overt racism, bosses not crediting their workers, and so forth – can be seen more readily when working away from the office as being unacceptable than they can when within the bubble.

There are multiple issues involved with WFH/WFA that do come into play, some legitimate. However, making the argument that productivity is the reason that companies want workers to come back to the office is at best specious. While it is likely more work for managers, a hybrid solution where the office essentially becomes a place where workers congregate when they do need to gather (and those times certainly exist) likely is baked into the cake by now especially as the Covid Delta variant continues to rage in the background. It’s time to move beyond Taylorism, and the fallacy of the time clock.

Kurt Cagle is the Community Editor of Data Science Central, a TechTarget property.


defining productivity in the work from home era
Defining Productivity In The Work-From-Home Era

As the Pandemic wanes (more or less), the debate about going back to the office vs. continuing to work from home remains in full swing. Central to this debate is the question about whether it is, in fact, better for companies for people to work from an office than it is to work remotely. The answers to this can be wildly divergent, from those who believe that productive work can only be done in an office, where resources can be consolidated, and people can meet, face to face, with one another for collaboration, to those who see work better done when the workers essentially control their own schedules and workflows.

To that end, one of the fundamental questions in this debate is what, exactly it means to be productive. Productivity has been an integral part of the work environment for more than a hundred and twenty years, yet it is also something that is both poorly defined and quite frequently massively misused. To understand this, you have to go back to Frederick Taylor, who first defined many of the principles of the modern work environment around the turn of the Twentieth Century.

How Frederick Taylor Invented Productivity

 

Frederick Taylor, Genius or Con Man?

Taylor was an odd character to begin with. He was born to a fairly wealthy family and managed to get admitted to Harvard Law School, but due to deteriorating eyesight, he decided to go into mechanical engineering instead, working first as an apprentice and later master mechanic at Midvale Steel Works in Pennsylvania, eventually marrying the daughter of the president of the company while working his way up from the shop floor to sales and eventually to management.

From there, Taylor began putting together his own observations about how inefficient the production lines were and how there needed to be more discipline in measuring productivity, which at the time could be measured as the number of components that a person could produce in a given period of time. In 1911, he wrote a monograph on the subject called The Principles of Scientific Management, which generalized these observations from the steel mill to all companies.

Taylor's work quickly found favor in companies throughout the United States, where his advocacy of business analytics, precision time-keeping, and performance reviews seemed to resonate especially well in the country's emerging industrial centers. At the same time, the data that he gathered was often highly suspect – for instance, he would frequently use the fastest or strongest workers as the baseline for all of his measurements, then recommend that owners dock the pay of workers who couldn't reach those levels. He also mastered the art of business consulting, pioneering many of the techniques that such consultants would use to sell themselves into companies decades later.

Productivity was one of his inventions as well, and it eventually became the touchstone of corporations globally – a worker's output could be measured by his or her productivity: the number of goods produced in a given period of time. Even this measure was somewhat deceptive, however. It was determined at least in part by the automation inherent in an assembly line, and it assumed that the production of widgets was the only meaningful measurement in a society that was even then shifting from agricultural to industrial, while other factors – the quality or complexity of the products, the physical or mental state of the workers, even the stability of the production line – were ignored entirely.


Do Not Fold, Spindle or Mutilate

Productivity In The Computer Age

Automation actually made a hash of productivity early on. An early bottling operation for beer usually involved manually filling a bottle, then stoppering it. A skilled worker could get perhaps a dozen such bottles out in a minute and could sustain that for an hour or so before needing to take a break. By the 1950s, automation had improved to the extent that a machine could fill and stopper 10,000 bottles a minute – a nearly thousandfold increase in productivity. The bottler at that point was no longer performing the manual labor, but simply ensuring that the machine didn't break down, that the empty bottles were positioned in their lattice, and that the filled ones were boxed and ready for shipment. Timing the bottler on filling bottles no longer made any sense, but the metric persisted anyway.

Not surprisingly, corporations quickly adopted Taylorism for their own internal processes. People became measured by how many insurance claims they could process, despite the fact that an insurance claim required a decision, which meant understanding the complexity of a problem. Getting more insurance claims processed may have made the business run faster, but it did so at the cost of making poorer decisions. It would take the rise of computer automation and the dubious benefits of specialized artificial intelligence to get to the point where semi-reasonable decisions could be made far faster, though the jury is still out as to whether the AI is in fact any better at making those decisions than humans.

Similar productivity issues arise with intellectual property. In the Tayloresque world, Ernest Hemingway was terribly unproductive. He wrote only about twenty books over his forty years as a professional writer, or one book every two years. Today, he could probably write a book a year, simply because revising manuscripts is far easier with a word processor than with a typewriter, but the time-consuming part of writing a book – actually figuring out what words go into it – would take just as long.

Even in the world of process engineering, in most cases what computers have done is reduce the number of separate people handling different parts of a process, often down to one. Forty years ago, putting together a slide presentation was a fairly massive undertaking that required graphic artists, designers, photographers, copywriters, typographers, printers, and so forth, and took weeks. Today, a ten-year-old kid can put together a PowerPoint deck that would have been impossible for anyone to produce earlier without a half-million-dollar budget.

We are getting closer to that number being zero: fill in some parameters, select a theme, push a button, and *blam* – your presentation is done. This means, of course, that there are far more presentations out there than anyone would ever be able to consume, and that the bar for creating good, eye-catching, memorable presentations becomes far, far higher. It also means that Tayloresque measurements of productivity very quickly become meaningless when measured in presentations completed per week.

That's the side usually left out in talking about productivity. Productivity is a measure of efficiency, and efficiency is a form of optimization. Optimizations reach a point of diminishing returns, where more effort results in less meaningful gains. That's a big part of the reason productivity took such a nosedive after the turn of the twenty-first century. Even with significantly faster computers and algorithms, the processes that could be optimized had already been tweaked so thoroughly that the biggest factor in performance gains came right back down to the humans, who hadn't really changed all that much in the last century.

A forum that I follow posed the question of whether it was better for one's career to work in the office or from home. One commenter noted that people who work remotely may get passed over for promotion compared to someone who comes in early and stays late, because managers don't see how hard the remote worker is working compared to the office worker. This is a valid concern, but it brings back a memory of when I started working a few decades ago and found myself putting in ten- and eleven-hour days at the office for weeks on end trying to hit a critical deadline. Eventually, I was stumbling in exhausted, and the quality of my work diminished dramatically. I was essentially giving my employer three additional hours a day at no cost, though after a while, they were getting what they paid for.

Knowledge work, which I and a growing number of people do, involves creating intellectual property. Typically, this involves identifying structure, building, testing, and integrating virtual components. It is easy to tell at a glance how productive I am, both in quantity (look at the software listings or the article page) and in quality (see whether it correctly passes a build process, or simply read it). This is true for most activities performed today. If there are questions, I can be reached by email or phone or SMS or Slack or Teams or Zoom or any of a dozen other ways. With most DevOps and continuous integration processes, a manager can look at a dashboard and literally see what I have worked on within the last few minutes.

In other words, regardless of whether you are working remotely vs. working in the office, there are ample tools that a manager has to be able to ascertain whether a worker is on track to accomplish what they have pledged to accomplish. This is an example of goal-oriented management, and quite frankly it is exactly how most successful businesses should be operating today.


The Paycheck Was Never Meant To Measure Time

The Fallacy of the Paycheck and the Time Clock

So let's talk a little bit about things from the perspective of being a manager. If you have never done it before, managing a remote workforce is scary. Most management training historically has focused on people skills – reading body language, setting boundaries, identifying slackers, dealing with personal crises, and most importantly, keeping the project that you are managing moving forward. Much of it is synthesizing information from others into a clean report, typically by asking people what they are working on, and some of it is delegating tasks and responsibilities. In this kind of world, there is a clear hierarchy, and you generally can account for the fact that your employees are not stealing time or resources from you *because you watch them*.

I’ll address most of this below, but I want to focus on the last, italicized statement first because it gets into what is so wrong about contemporary corporate culture. One place where Tayloresque thinking embedded itself most deeply into the cultural fabric of companies is the notion that you are paying your employees for their time. This assumption is almost never questioned. It should be.

Until well into the Industrial Age, people were typically paid monthly or fortnightly if they were employees of a member of the nobility or gentry, produced and sold their own goods if they were craftsmen or farmers, or were budgeted an account if they were senior members of the church. Oftentimes such payment partially took the form of room and board (or food) or similar services in exchange. Timekeeping seldom entered into it – you worked when there was work to be done and rested when the opportunity arose.

Industrialization brought with it more precise clocks and timekeeping, and you were paid for the time that you worked; because of the sheer number of workers involved, this also required better sets of accounting books and more regular disbursement of funds for payment. It was Taylor who quantized this down to the hour, however, with the natural assumption that you were being paid not per day of work but for ten hours of work a day. This was also when the term work ethic seemed to gain currency – the idea being that a good worker worked continuously, never complained, never asked for too much, and that bad workers were lazy and would steal both resources and time from employers if they could get away with it.

In reality, most work is not continuous in nature but can be broken down into individual asynchronous tasks pulled from a queue. It only appears continuous when the queue never empties – when tasks arrive at least as fast as they can be completed. Office work, from the 1930s to the 1970s, usually involved a staff of workers (mainly female) who worked in pools to process applications, invoices, correspondence, or other content – when a pool worker was done, she would be assigned a new project to complete. This queue-and-pool arrangement kept everyone busy, further cementing the idea that an employer was actually paying for the employee's time, especially since there was usually enough work to fill the available hours of the day.

That balance shifted in the 1970s and 80s as the impact of automation began to hit corporations hard. The secretarial pool had all but disappeared by 1990 with the advent of computers and networking. While productivity shot up – fewer people were doing much more "work" in the sense that automation enabled far more processing – people began to find themselves with less and less to do, which made it possible for companies to eliminate or consolidate existing jobs. A new generation picked up programming and related skills, and the number of companies exploded in the 1990s as entrepreneurs looked for new niches to automate and the barrier to entry for new companies dropped dramatically.


By focusing on demonstrable goals rather than “seat-time”, organizations can become more data-oriented.

The WFA Revolution Depends Upon Goals and Metrics

Since 2000, there have been three key events that have dramatically changed the landscape for work. The first was the rise of mobile computing, which has made it possible for people to work anywhere there is a network signal. The second was the consolidation of cloud computing, which removed the requirement that resources be on premises. Finally, the pandemic stress-tested the idea of work virtualization in a way that nothing else could have, likely forcing the social adoption of remote work about a decade earlier than it would have happened otherwise.

Productivity through automation has now reached a stage where it is possible to

  1. get reliable metrics based upon work completed towards specific goals, regardless of time specifically spent,
  2. automate those tasks which do not in fact require more than minimal human intervention,
  3. get access to resources needed to accomplish specific tasks, regardless of where those tasks are accomplished,
  4. provide a superior environment for meeting virtually across multiple time zones, creating both a video and transcript artifact of such meetings,
  5. provide tools for collaborating in the same way, either synchronously or asynchronously (addressing the water cooler problem),
  6. ensure that information remains secure, and
  7. provide a set of eyeballs on evolving situations anywhere in the world at any time.

Put another way – remote workforce productivity is not the issue here.

Most people are far more productive than they have ever been, to the extent that it is becoming harder and harder to fill a forty-hour week most of the time. I'd argue that when an employer pays an employee, what they should be doing is spreading a year-long payment into twenty-six chunks, paying not for the time spent but for the availability of the expertise. That the workweek is twenty hours one week and fifty hours the next is irrelevant – you are paying a salary, and the actual number of hours worked is far less important than whether the work is being done consistently and to a sufficiently high standard. This was true before the pandemic, and if anything it is more true today.

In the 1970s, businesses began pushing labor laws so that companies could classify part-time workers as hourly – this meant that, rather than having a minimum guaranteed total annual income, such workers were only paid for their time on-premises. Such workers (who were also usually paid at or even below minimum wage) would typically be the ones to bear the brunt if a business had a slow week, yet were also typically responsible for their own healthcare and were ineligible for other benefits. In this way, even if on paper they were making $30,000 a year, in reality such workers' actual income was likely half that, even before taxes. By 1980, labor laws had effectively institutionalized legalized poverty.

After the pandemic, companies discovered, much to their chagrin, that their rapid shedding of jobs in 2020 came back to bite them hard in 2021. Once people have a job, they develop a certain degree of inertia in looking for a new one, and oftentimes may refuse to look for other work simply because switching jobs is always somewhat traumatic. This also tends to depress wage growth within companies, because most companies will only pay a person more (and even then only up to a point) if they also take on more responsibility – in other words, new hires generally make more than existing workers in the same positions.

At the bottom of the pandemic bust, more than 25 million people were thrown out of work, a deeper drop even than during the Great Depression. The rebound was fairly strong, however, which meant that suddenly every company that had jettisoned workers was trying to rehire all at once. For the first time in a generation, labor had newfound bargaining strength. This also coincided with a long-overdue generational retirement of the Boomers and the subsequent falloff in the number of GenXers, a cohort roughly 35% smaller than the previous generation. Demographic trends hint that the labor market is going to favor employees over employers for at least the next decade.

Given all that, it's time to rethink productivity in the Work From Home era. The first part of this is to understand that work has become asynchronous, and ironically, it's healthier that way. There will be periods when employees will be idle, and others when employees will be very busy. Most small businesses implicitly understand this – restaurants (and indeed, most service economy jobs) have slack times and busy times. Perhaps it is time for "hourly" workers to go back to being paid salaries again. That way, if someone is not needed on-site at a particular time, sending them home doesn't become an economic burden for them. On the flip side, that also puts the onus on the worker to remain reachable, in any number of ways, should things get busy again.

Once you move into the knowledge economy, the avenues for workers become more open. The salary model holds here as well, but so too does the notion of being available at certain times. I've actually seen an uptick in the number of startup companies that use Slack as a way of managing workflow, even in service sector work, as well as for indicating when people need to be in the office versus simply need to be working on projects.

I am also seeing the emergence of a 3-2-2 week: three days that are specifically set aside for meetings, either onsite or over telepresence channels such as Zoom or Teams; two days where people may be on call but generally don't have to meet and can focus on getting the most done without meetings interrupting their concentration; and two days that are considered "the weekend." When workloads are light (such as during summers or winter holidays), this can translate into "light" vacations, where people are just putting in a couple of hours of work a day during their "Fridays" and are otherwise able to control their schedules. When workloads are heavy (crunch time), work can even bleed into the weekend, so long as that isn't sustained for an extended period of time.

Asynchronous, goal-oriented, and demonstrable project planning also becomes more critical in the Work From Home era. This, ironically, means that "scrum"-oriented practices should be deprecated in favor of being able to attach work products (in progress or completed) to workflows – whether that's updating a Git repository, publishing a blog, updating a reference standard, or designing media or programmatic components. Continuous integration is key here – use DevOps processes to ensure that code and resources represent the current state of the project and provide a tracking log of what has been done by each member of a given team.
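
As a minimal sketch of what that kind of tracking log can look like in practice – assuming nothing more than a local Git checkout and the git command-line tool – a few lines of Python are enough to summarize each team member's recent activity. The point is to make work products visible, not to count commits for their own sake (which would just be Taylorism by another name).

```python
# Minimal sketch: summarize recent Git activity per team member.
# Assumes a local Git checkout and the `git` CLI on the PATH.
import subprocess
from collections import Counter

def commits_by_author(repo_path: str = ".", since: str = "1 week ago") -> Counter:
    """Return a count of commits per author since a relative date."""
    result = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--pretty=%an"],
        capture_output=True, text=True, check=True,
    )
    return Counter(line for line in result.stdout.splitlines() if line.strip())

if __name__ == "__main__":
    for author, count in commits_by_author().most_common():
        print(f"{author}: {count} commits in the last week")
```

A richer dashboard would link each commit to the goal or work item it advances, but even this much gives a manager a demonstrable record without anyone having to be "watched."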


Micromanagement, abusive behavior, and political games – is it any wonder people are staying away from the office?

Management: Solution Or Problem?

For production teams, this should be old hat, but it's equally incumbent on management to work in the same way, and ironically, this is where the greatest resistance is likely to come from. Traditional management has typically been more face-to-face in its interactions (in part because senior management has also traditionally been more sales-oriented). The more senior the position, the more likely that person will need comprehensive real-time reporting, and the more difficult (and important) it is to summarize the results from multiple divisions and departments.

Not surprisingly, this is perhaps the single biggest benefit of a data-focused organization with strong analytics: it makes it easier for managers to see in the aggregate what is happening within an organization. It also makes it easier to see who is being productive, who needs help, and who, frankly, needs to be left behind – a group that includes more than a few of those same managers.

https://www.theatlantic.com/ideas/archive/2021/07/work-from-home-be…

You cannot talk about productivity without also talking about non-productivity. Non-productivity doesn't come from people who are genuinely trying but are struggling due to a lack of resources, training, or experience. One thing that many of these same tools can do is better highlight who those people are without putting them on the spot, and a good manager will then be able to either assign a mentor or make sure they get the training.

Rather, it comes from those workers who have managed to find a niche within the organization where they don't actually do much that's productive but seem to be constantly busy. Work from home may seem ideal for such workers, but if you assume that it also involves goal-oriented metrics, it actually becomes harder to "skate" when working remotely, because there is a requirement to have a demonstrable product at the end of the day.

Finally, one of the biggest productivity problems with WFH/WFA has to do with micromanagement as compensation for being unable to “watch” people at work. This involves (almost physically) tying people to their keyboards or phones, monitoring everything that is done or said, and then using lack of “compliance” as an excuse to penalize workers.

During the worst of the pandemic, stories emerged of companies doing precisely this. Not surprisingly, those companies found themselves struggling to find workers as the economy started to recover, especially since many of them had a history of underpaying their workers as well. Offices tend to create bubble effects – people are less likely to think about leaving when they are in a corporate cocoon than when they are working from home, and behavior that might be prevalent within offices – gaslighting, sexual harassment, bullying, overt racism, bosses not crediting their workers, and so forth – is more readily recognized as unacceptable when seen from outside the office than from within the bubble.

There are multiple issues involved with WFH/WFA, some of them legitimate. However, the argument that productivity is the reason companies want workers back in the office is at best specious. While it is likely more work for managers, a hybrid solution – where the office essentially becomes a place where workers congregate when they do need to gather (and those times certainly exist) – is likely baked into the cake by now, especially as the Covid Delta variant continues to rage in the background. It's time to move beyond Taylorism and the fallacy of the time clock.

Kurt Cagle is the Community Editor of Data Science Central, a TechTarget property.

Source Prolead brokers usa

curiosity and inquisitive mindset keys to data science and life success
Curiosity and Inquisitive Mindset: Keys to Data Science – and Life – Success

In July 2014, Malaysia Airlines Flight 17 (MH17), a passenger flight from Amsterdam to Kuala Lumpur, was shot down over eastern Ukraine. Over the next four years, Bellingcat – an independent international collective of researchers, investigators, and citizen journalists – combined open-source and social media data with an inquisitive and curious mindset to uncover proof of Russia's involvement in the MH17 tragedy.

Bellingcat made a break in the case when it discovered videos and photos posted online that identified and tracked a Russian Buk TELAR missile system as it made its way through rebel-controlled territory into Ukraine.  Bellingcat identified that the Russian military was involved in the MH17 tragedy years before it was confirmed by European officials (Figure 1).

Figure 1: "How Bellingcat tracked a Russian missile system in Ukraine"

Bellingcat identified the location of the convoy by comparing images posted online to satellite imagery. By matching multiple objects in the images, Bellingcat's team determined the precise location of each image. The team used shadows in the images to determine the approximate time of day of each photo or video. The trail led them to Kursk, Russia, and established that the missile launcher that shot down MH17 came from a Russian brigade [1].
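
The shadow technique, in particular, comes down to simple geometry. As an illustration only – the object height and shadow length below are hypothetical values, not figures from the MH17 investigation – the ratio of an object's height to its shadow length gives the sun's elevation angle, which, combined with the known date and location of the image, narrows down the time of day.

```python
# Illustration of shadow-based time estimation (hypothetical numbers only).
# The sun's elevation angle follows from the height-to-shadow-length ratio;
# matching that angle against the sun's path for a known date and place
# constrains when the photo was taken.
import math

def sun_elevation_deg(object_height_m: float, shadow_length_m: float) -> float:
    """Sun elevation angle (in degrees) implied by an object and its shadow."""
    return math.degrees(math.atan2(object_height_m, shadow_length_m))

# Example: a 3 m road sign casting a 4 m shadow puts the sun at roughly 37 degrees.
print(f"Sun elevation: {sun_elevation_deg(3.0, 4.0):.1f} degrees")
```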

I had my own personal inquisitive epiphany when I was doing research for my blog “Creating Assets that Appreciate, Not Depreciate, in Value Thru Cont…”.  I suspected that Tesla must be building a massive simulation environment in which to train the Full Self Driving (FSD) module behind Tesla’s autonomous vehicle plans.  However, I struggled to find details until it occurred to me to analyze Tesla’s job board! There, I uncovered this job posting for a Tesla Autopilot Simulation, Tools Engineer (note: link is no longer active):

“The foundation on which we [Tesla] build these [autonomous vehicle] elements (such as building tools to perform virtual test drives, generate synthetic data set for neural network training) is our simulation environment. We develop photorealistic worlds for our virtual car to drive in, enabling our developers to iterate faster and rely less on real-world testing. We strive for perfect correlation to real-world vehicle behavior and work with Autopilot software engineers to improve both Autopilot and the simulator over time.”

Yes, Tesla needed to build this massive cloud simulation environment so that the individual learnings from each of the 1M+ Tesla cars could be shared, reused, and continuously refined (Figure 2).

Figure 2: Tesla Simulator Role in Driving Autonomous Vehicle Vision

An inquisitive mind, with the curiosity to explore the unknown, is something that comes naturally to humans. I remember being young (once) and taking apart my dad's radio to see how it worked (I later explained to him that the radio had extra parts when I put it back together).

Curiosity may be our most important human characteristic when compared to AI-powered machines that are continuously learning and adapting. Humans can't learn faster than machines armed with AI/ML models, nearly unlimited processing power, unbounded amounts of granular data, and a wide range of "learning" mechanisms such as Machine Learning, Deep Learning, Reinforcement Learning, Transfer Learning, Federated Learning, Meta Learning, and Active Learning (Figure 3).

Figure 3:  Different types of AI / ML Learning Algorithms

Unfortunately, society goes to great lengths to crush curiosity and an inquisitive mindset in favor of "standardization." As youths, we sit through standardized classes with standardized curriculums in standardized classrooms with standardized testing. No one is allowed to color outside the lines. But the curiosity crushing doesn't stop there, because we then take jobs in organizations with standardized org charts, where employees sit like prairie dogs in standardized offices with their standardized job descriptions, standardized performance reviews, and standardized pay grades.

I fear that this "standardization" will lead to "lowest common denominator" human development. And to address the range and depth of problems that we face as a society, we MUST go beyond "standardization."

The Harvard Business Review article “The Business Case for Curiosity” discusses the importance of nurturing curiosity and that inquisitive mindset.

“Most of the breakthrough discoveries and remarkable inventions throughout history, from flints for starting a fire to self-driving cars, have something in common: They are the result of curiosity. Curiosity is the impulse to seek new information and experiences and explore novel possibilities in search of new solutions to everyday problems and challenges.”

The article presents the five-dimensions of curiosity:

  • Deprivation sensitivity is recognizing a gap in knowledge the filling of which offers relief
  • Joyous exploration (my favorite dimension) is being consumed with wonder about the fascinating features of the world
  • Social curiosity is talking, listening, and observing others to learn what they are thinking and doing
  • Stress tolerance is a willingness to accept and harness the anxiety associated with novelty
  • Thrill seeking is being willing to take physical, social, and financial risks to acquire varied, complex, and intense experiences

The article is a good read if you are seeking ways to get more innovative thinking out of your teams (and yourself).

Why am I talking so much about curiosity and developing an inquisitive mindset? Because great data scientists don't just think outside the box, they seek t…. Great data scientists are constantly seeking to explore, discover, test, and create or blend new approaches, and to create new variables that "might" be better predictors of performance (Figure 4).

Figure 4: Data Science Collaborative Engagement Process

"Might" may be the data scientist's (and human's) most powerful enabler. "Might" grants us the license to explore, to follow our curiosity, to try different things, to fail and learn what doesn't work, to try again, and eventually to come up with a better approach for solving wicked hard problems.

What can we do as leaders to encourage and nurture curiosity and that inquisitive mindset? Curiosity must be "allowed" to exist for it to flourish. And that's where true leadership comes into play. Although many leaders say they value curiosity and an inquisitive mind, in fact many seek to stifle curiosity because it thrives by challenging the status quo. Many "leaders" treat curiosity as the enemy, as some sort of disease.

I believe Design Thinking is key to nurturing curiosity and an inquisitive mindset.  Design Thinking provides the mentality to accept that all ideas are worthy of consideration, that the best ideas won’t likely come from senior management, and that “diverge to converge” may be our most powerful ideation concept (Figure 5).

Figure 5: Blend Design Thinking with Data Science to Nurture Curiosity

Our society is facing wicked hard problems where “standardized” approaches just won’t work (and in fact, many of these “standardized” approaches have gotten us into this predicament).  Unleashing our natural curiosity and inquisitive minds is critical to addressing these problems.

In the preface of my new book “The Economics of Data, Analytics, and Digital Transformation”, I state the following:

“The COVID-19 pandemic has been exacerbated by incomplete and opaque data supporting suspect analytics, economic turbulence despite trillions of dollars spent in overly generalized financial interventions, and civil unrest from years of ineffective blanket policy decisions. The ability to uncover and leverage the nuances in data to make more effective and informed policy, operational, and economic decisions is more important than ever. However, improving decisions in a world of constant change will only happen if we create a culture of continuous exploring, learning, and adapting.”

The same old "standardized" business and operational processes won't help us create the culture of continuous exploring, learning, and adapting that we need in order to make informed policy, operational, and economic decisions about these wicked hard problems. We must embrace curiosity and an inquisitive mindset to explore, try, test, fail, try again, and try again until we discover, blend, or create ideas that "might" lead to better business, operational, environmental, and societal outcomes.

[1] “How Bellingcat tracked a Russian missile system in Ukraine” https://www.cbsnews.com/news/how-bellingcat-tracked-a-russian-missi…

Source Prolead brokers usa

geospatial modeling the future of pandemic analysis
Geospatial Modeling: The Future of Pandemic Analysis

 

  • Geospatial modeling may be the future of pandemic control.
  • Recent studies analyzed local data and found hidden trends.
  • Border control isn’t enough to stop the spread of Covid-19.
  • Where you live determines your risk for the disease.

Significant amounts of data have been collected, analyzed, and reported globally since the start of the Covid-19 pandemic, leading to a better understanding of how the disease spreads. Much of this data has been analyzed with geospatial modeling, which finds patterns in data that includes a geospatial (map) component. The modeling technique uses Geographic Information Systems (GIS), originally developed in the 1960s to store, collate, and analyze data about land usage [1]. Since its inception, GIS has been used in an ever-increasing range of applications, including modeling of human behavior in a geospatial context. More recently, the tool has been applied to Covid-19 data to analyze how the disease spreads globally (across national borders) and locally (within borders).

The bulk of Covid-19 geospatial modeling research has focused on global concerns like international travel, the effectiveness of border closures, and the spread of disease in a specific country taken as a whole. Recently, studies have been applied at the local level – in cities, neighborhoods, or specific rural areas. These local studies have revealed significant disparities in both Covid-19 testing and cases between different types of neighborhoods within cities; the results indicate that national border controls are not enough, and that the pandemic must also be tackled at a local level. Additionally, analysis has revealed that conclusions obtained from one country's data cannot necessarily be applied to another country because of differences in social structures.

The Spread of Covid-19 Isn’t Random

One ecological research paper [3] explored spatial inequities in COVID-19 confirmed cases, positivity, mortality, and testing in three U.S. cities during the first six months of the pandemic. The research concluded that socially vulnerable neighborhoods – those suffering from residential segregation and with a history of systematic disinvestment – had more confirmed cases, higher test positivity and mortality rates, and lower testing rates compared to less vulnerable neighborhoods.

A similar Canadian-based study [4] revealed that "social injustice, infrastructure, and neighborhood cohesion" were characteristics associated with the increasing incidence and spread of COVID-19. Maps of locales showed that hotspots were more likely to be found in disadvantaged neighborhoods.

The study concluded that cases are not randomly spread but spatially dependent. In other words, your odds of contracting and dying of the disease are higher if you live in a socio-economically disadvantaged area. The study authors urge that a tailor-made monitoring and prevention strategy – geared towards specific neighborhood issues – must be applied to COVID-19 mitigation policies to guarantee control of the disease.

Covid-19 Data Can’t be Generalized

Until fairly recently, much of the pandemic modeling data came from China. However, while Covid-19 data from one country (in this case, China) may offer important insights about the spread of disease, those results will not always be applicable to other countries. This is likely because social and urban structures in China may be quite different from those in Europe and elsewhere.

One study using data from Catalonia, Spain [2] showed differing results when comparing global spatial autocorrelation between data from China and data from Catalonia. Spatial autocorrelation describes the degree to which values at nearby locations are similar to each other. The study found that the Catalonia results showed no spatial autocorrelation with regard to Covid-19 statistics (with one minor exception), while studies using Chinese data showed strong spatial autocorrelation. In addition to differences between social structures, one reason for the disparity may be that the Chinese data was gathered from a huge geographical area and so may have suffered from scale effects.
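
The studies report their own statistics, but as a minimal sketch of how global spatial autocorrelation is commonly quantified, the snippet below computes Moran's I for a toy example; the case counts and the neighborhood adjacency matrix are hypothetical, and real analyses would typically use dedicated GIS or spatial-statistics tooling.

```python
# Toy example of global spatial autocorrelation via Moran's I.
# Values of I near +1 indicate clustering, near 0 spatial randomness,
# and near -1 dispersion. All numbers below are hypothetical.
import numpy as np

def morans_i(values: np.ndarray, weights: np.ndarray) -> float:
    """Global Moran's I for a 1-D array of values and an n x n spatial weight matrix."""
    n = len(values)
    z = values - values.mean()
    s0 = weights.sum()
    return (n * (weights * np.outer(z, z)).sum()) / (s0 * (z ** 2).sum())

# Four hypothetical neighborhoods in a row (A-B-C-D), each adjacent to its neighbors.
adjacency = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
cases = np.array([80.0, 75.0, 20.0, 15.0])  # cases cluster at one end of the row

print(f"Moran's I = {morans_i(cases, adjacency):.2f}")  # ~0.38: positive clustering
```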

The Catalonia study concluded that there may be a spatially random pattern of positive cases. However, the authors noted a few anomalies that indicated the possibility of hidden local spatial autocorrelation in specific areas.

All three studies concluded that the pattern of Covid-19 spread warrants measures to contain the virus at the local level (city or town) as well as at the global level. In other words, border controls are not enough to contain the virus unless resources also target regional hotspots.

References

[1] Overview of GIS History

[2] A first insight about spatial dimension of COVID-19: analysis at mu…

[3] Spatial Inequities in COVID-19 Testing, Positivity, Confirmed Cases…

[4] COVID-19 in Toronto: A Spatial Exploratory Analysis

Source Prolead brokers usa

taking a cloud native approach to software development microservices
Taking a Cloud-Native Approach to Software Development & Microservices

The time of dedicated hardware and on-premises infrastructure is disappearing. With the emergence of cloud computing, most businesses, small or big, have already adopted or are transitioning to cloud-native architecture to keep innovating in a fast and efficient manner. This approach leverages the benefits of the cloud by using an open-source software stack to develop and deploy easily scalable and resilient applications.

In this blog, we will look at the cloud-native approach and why it matters in the world of software development.

Cloud Native Defined

Cloud native is an approach to developing, running, and optimizing applications that takes full advantage of the cloud computing delivery model. This method allows developers to fully use cloud resources and integrate new technologies like Kubernetes, DevOps, and microservices for rapid development and deployment.

In simple terms, the cloud-native approach is all about creating applications without worrying about the servers and underlying infrastructure. And this flexibility is one of the major advantages of the cloud-native approach over a monolithic architecture.

In fact, IDC research states that 90% of new enterprises will adopt a cloud-native approach by 2022.

It is clear that the cloud-native approach will largely take over from legacy systems in the near future. On-premise physical servers that don't integrate with new systems and hinder innovation will be replaced by distributed servers.

Related: Which one to choose: Cloud Native vs Traditional Development

Cloud Native Applications

Cloud-native applications are created as a composition of small, independent, and loosely coupled microservices. They are built to deliver significant business value – to scale rapidly and to incorporate feedback for continuous improvement. These microservices are packaged in Docker containers, orchestrated with Kubernetes, and managed and deployed using DevOps workflows.

Docker is a platform-as-a-service offering that packs all the software you need to run into one executable package known as a container. These containers run in a virtualized environment that is largely independent of the host setup. Kubernetes, on the other hand, is an open-source container orchestration service responsible for the management and scaling of containers. DevOps workflows enable software developers to release and update apps faster using agile processes and new automation tools.
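
As one small illustration of that container workflow – assuming a local Docker engine and the docker Python SDK (pip install docker), and using a stock public image purely as an example – a few lines are enough to pull an image and run it as an isolated container:

```python
# Illustration only: run a throwaway containerized workload via the Docker SDK.
# Assumes the Docker engine is running locally and `pip install docker` was done.
import docker

client = docker.from_env()                 # connect to the local Docker daemon
logs = client.containers.run(
    "alpine:3.19",                         # stock public image used as an example
    command=["echo", "hello from a container"],
    remove=True,                           # clean up the container once it exits
)
print(logs.decode().strip())
```

In a cloud-native pipeline, the same image would be built once, pushed to a registry, and then scheduled and scaled by Kubernetes rather than run by hand.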

"The Cloud Native Computing Foundation (CNCF) found that container popularity has picked up the pace, increasing to 92% use in production environments. Currently, 91% of enterprises are leveraging Kubernetes, 83% of which use it in production."

Source: CNCF Survey Report

This simply means that organizations are increasingly containerizing existing applications or moving new workloads into containers as they become more comfortable with the technology.

Related: Legacy Application Modernization Doesn’t Mean Starting Over

Why Cloud Native Approach Matters in Software Development

Cloud-native computing saves cloud resources and helps developers build the right architecture using best-of-breed technologies and tools. The architecture utilizes cloud services such as AWS EC2, S3, and Lambda to support dynamic and agile development. Using this approach, a software application is divided into small microservices centered around APIs for establishing connections. Buoyed by automation capabilities, the architecture is isolated from server and OS dependencies and managed through agile DevOps processes.

Here are some key reasons why cloud native is a modern approach to software development.        

Flexibility and Scalability

The loosely coupled structure of microservices enables software developers to create applications on the cloud by choosing the best tools for the job, depending on the specific business requirements. Put simply, software architects can choose the data storage and programming languages that make the most sense for each specific microservice.

Developers are no longer restricted to working with one technology. As technologies keep advancing, software professionals can take advantage of them without worrying about how changes in one microservice affect the application as a whole. They can easily scale microservices independently, remove old ones, and add new ones while keeping the rest of the application code intact. They can take a non-functioning microservice temporarily offline while launching the latest version of the application.

 

API based Communication

Cloud-native microservices rely on representational state transfer (REST) APIs for interaction and collaboration between services. APIs are a set of tools and protocols responsible for communication between applications and services. Such protocol-level designs avoid the risks associated with direct linking, shared memory models, or direct database access between applications. For example, binary protocols are a good fit for communication between internal services; other examples of modern open-source protocols and formats include gRPC, Protobuf, and Thrift.
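
As a minimal sketch of that API-first pattern – using Flask here purely as one of many possible frameworks, with a hypothetical "inventory" service and an in-memory stand-in for its database – each microservice exposes its data only through its own REST endpoints:

```python
# Minimal sketch of a REST microservice (Flask is just one possible choice).
# The "inventory" service and its data are hypothetical examples.
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for the service's own private data store.
STOCK = {"sku-123": 42, "sku-456": 7}

@app.route("/inventory/<sku>", methods=["GET"])
def get_stock(sku: str):
    """Other services read stock levels through this API, never through the database."""
    if sku not in STOCK:
        return jsonify({"error": "unknown sku"}), 404
    return jsonify({"sku": sku, "on_hand": STOCK[sku]})

if __name__ == "__main__":
    app.run(port=5001)
```

A consuming service would then issue an ordinary HTTP call (for example, GET /inventory/sku-123 against this service's host) rather than reaching into the inventory database directly, which is what keeps the services loosely coupled.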

And this aligns with what Icreon delivered for ASTM, the world's largest standards development organization, by integrating APIs to create a well-designed information system that connects all ASTM datapoints together in real time and ensures the right data gets to the right place. The platform is built atop Azure Cloud, which lets the organization scale up the system whenever the need arises.

DevOps and Agile Frameworks

Microservices are a natural fit for agile frameworks that work on DevOps and CI/CD (continuous integration and continuous delivery) models. Unlike the monolithic approach, DevOps workflows create an environment wherein software development is an ongoing cycle. In cloud-native development, multiple cross-functional teams follow these core principles of DevOps to work simultaneously on different features or modules of an application. The CI/CD approach also allows developers to embrace feedback or changing requirements (through ongoing automation) throughout the development lifecycle, from testing to delivery and deployment.

Related: Making the Case for Agile Software Development in 2021

Final Thoughts

Cloud-native architecture takes software development to the next level, equipping developers with new tools and technologies to deliver higher-quality products and solutions faster. In addition, developing with the appropriate methodologies and tools yields significant savings in cloud computing resources.

The transition from a monolithic approach to cloud-native development enables organizations to stay ahead of the competition. Containers make it easier to distribute an application among team members, run it in different environments, and deploy the same container in production. Microservices have introduced a new way to structure your system, improve encapsulation, and create maintainable small units or services that can quickly adapt to new requirements.

About the Author  

Paul Miser is the Chief Strategy Officer of Icreon, a digital solutions agency and Acceleration Studio. He is also the author of Digital Transformation: The Infinite Loop – Building Experience Brands for the Journey Economy. 

Source Prolead brokers usa
