This week was a milestone for this editor. With the jab of a needle, I entered the ranks of the inoculated, and in the process realized how much the world (and especially the way we earn a living) has changed.
Many companies are grappling with what to do post-pandemic, even as COVID-19 continues to persist worryingly in the background. Do they bring workers back to the office? Will they come if called? Does it make sense to go fully virtual? To embrace a hybrid model?
Workers too are facing uncertainty, not all of it bad. The job market is tightening, and companies are struggling to find workers. Work from home (or more properly, work from anywhere) has proven wildly popular, and in many cases people are willing to walk away from tens of thousands of dollars a year for the privilege. This has forced companies, already struggling to attract new employees, to reconsider how they interact with their workforce to a degree unthinkable before the pandemic. Much of this comes down to the fact that AI (at a very broad level) is reducing or even eliminating the need for most people to be in the office. Indeed, one of the primary use cases of AI is to be vigilant when problems or opportunities arise and flexible enough to know whom to call when something does go wrong.
On a related front, AIs are increasingly taking over in areas that may be seen as fundamentally creative. Already, generated personalities are becoming brand influencers as GANs become increasingly sophisticated. Similarly, OpenAI’s GPT-3 engine is beginning to replace writers in generating things like product descriptions and press releases. Additionally, robot writers are making their way into generating working code based upon the intent of the “programmer”, a key pillar of the no-code movement.
Finally, robotic process automation is hacking away at the domain of integration, tying together disparate systems with comparatively minimal involvement of human programmers. Given that integration represents upwards of 40% of all software being written at any given time, the impact this has upon software engineers is beginning to be felt throughout the sector. That this frees up people to deal with less-repetitive tasks is an oft-stated truism, but it also changes people from being busy 24/7 to being available only in a more consultative capacity, with even highly skilled people finding those skills utilized in a more opportunistic fashion.
The nature of work is changing, and those people who are at the forefront of that change are now involved in an elaborate dance to determine a new equilibrium, one where skillsets and the continuing need to acquire them are adequately compensated, and where automation is tempered by the impacts that automation has on the rest of society.
These issues and more are covered in this week’s digest. This is why we run Data Science Central, and why we are expanding its focus to consider the breadth and depth of digital transformation in our society. Data Science Central is your community. It is a chance to learn from other practitioners, and a chance to communicate what you know to the data science community overall. I encourage you to submit original articles and to make your name known to the people who are going to be hiring in the coming year. As always, let us know what you think.
Data integration is defined as gathering data from multiple sources to create a unified view. The consolidated data gives users consistent, self-service access to their data, and with it a complete picture of key performance indicators (KPIs), customer journeys, market opportunities, and so on.
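As a minimal sketch of what that unified view looks like in practice, here is a hypothetical example in Python with pandas; the library choice, the in-memory data frames, and the column names are illustrative assumptions, not part of any particular integration product:

```python
import pandas as pd

# Two hypothetical sources keyed by the same customer_id.
sensors = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "avg_daily_usage": [4.2, 1.1, 7.8],
})
crm = pd.DataFrame({
    "customer_id": [1, 2, 4],
    "segment": ["premium", "basic", "basic"],
})

# An outer join keeps every record from both sources; the gaps (NaN) it
# exposes are exactly what an integration pipeline must then resolve.
unified = sensors.merge(crm, on="customer_id", how="outer")
print(unified)
```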
Following is a list of seven reasons why your organization needs a data integration strategy:
Keeping up with the evolution of data
Sensors, networking, and cloud storage are all becoming more affordable, resulting in a vast amount of data. AI and machine learning technology can make sense of it all, with capabilities far exceeding those of humans. All that is required is for data from all sources to be combined, and the algorithms will work!
Making data available
Accessible data is a huge benefit for your business; it’s as easy as that! Imagine that all of your company’s employees, or your business partners, could have access to centralized data. Making reports and keeping all processes up to date will be easier and much more encouraging for your personnel.
Eliminating security issues
Having access to all forms of continuously updated and synchronized data makes it easier to use AI and machine learning solutions to analyze any suspicious activity and decide how to handle it, or even set up automatic algorithms.
Improving data transparency
With a data integration plan, you can improve all of your interfaces and manage complexity while obtaining maximum results and the best information delivery.
Making data more valuable
Data integration adds value to the data. Data quality techniques are becoming more common in DI solutions; they discover and improve data characteristics, making the data cleaner, more consistent, and more complete. Because the datasets are aggregated and calculated, they become more useful than raw data.
Simplifying data collaboration
Integrated and available data opens up a whole new universe of possibilities for internal and external collaboration. With the available data in the correct format, anyone relying on your reporting can have a far more effective impact on the processes.
Fueling smarter business decisions
Using organized repositories with several integrated datasets, you can achieve a remarkable level of transparency and knowledge throughout the entire organization. Nuances and facts that were never before accessible will now be in your hands, allowing you to make the correct decisions at the right moment.
The correct data integration methods can translate into insights and innovation for years to come. Consider your needs, your goals, and which type of approach matches both, so you make the best decision for your business.
Among the many decisions you’ll have to make when building a predictive model is whether your business problem is a classification or an approximation task. It’s an important decision because it determines which group of methods you use to create the model: classification (decision trees, Naive Bayes) or approximation (regression trees, linear regression).
This short tutorial will help you make the right decision.
Classification – when to use?
Classification is used when we want to predict which of several discrete categories an observation belongs to. For example:
Is a particular email spam? Example categories: “SPAM” & “NOT SPAM”
Will a particular client buy a product if offered? Example categories: “YES” & “NO”
What range of success will a particular investment have? Example categories: “Less than 10%”, “10%-20%”, “Over 20%”
Classification – how does it work?
Classification works by looking for patterns in similar observations from the past, then finding those patterns that consistently accompany membership in a certain category. Suppose, for example, we would like to predict observations:
With a target variable y taking two categorical values, coded blue and red. Empty white dots are unknown – they could be either red or blue.
Using two numeric variables x1 and x2, represented on the horizontal and vertical axes. In the figure, an algorithm calculated a function represented by the black line. Most of the blue dots are under the line and most of the red dots are over it. This “guess” is not always correct; however, the error is minimized: only 11 dots are “misclassified”.
We can predict that empty white dots over the black line are really red and those under the black line are blue. If new dots (for example future observations) appear, we will be able to guess their color as well.
Of course, this is a very simple example; there can be more complicated patterns to look for among hundreds of variables, which is not possible to represent graphically.
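To make this concrete, here is a minimal sketch in Python with scikit-learn (an assumption: the article names no algorithm or library) that mimics the two-variable, two-color setup described above, with logistic regression standing in for the linear decision boundary:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two numeric predictors x1, x2; the class ("red" = 1, "blue" = 0) tends
# to fall on one side of a line, with some overlap (misclassified dots).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Fit a linear decision boundary (the "black line" in the description).
model = LogisticRegression().fit(X, y)

# Predict the color of new, "empty white" dots from their x1, x2 values.
new_dots = np.array([[1.0, 0.5], [-0.8, -1.2]])
print(model.predict(new_dots))   # predicted classes for the new dots
print(model.score(X, y))         # fraction of training dots classified correctly
```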
Approximation – when to use?
Approximation is used when we want to predict the probable value of a numeric variable for a particular observation. Examples could be:
How much money will my customer spend on a given product in a year?
What will the market price of apartments be?
How often will production machines malfunction each month?
Approximation – how does it work?
Approximation looks for patterns in similar observations from the past and tries to find how they impact the value of the researched variable. Suppose, for example, we would like to predict observations:
With a numeric variable y that we want to predict.
With a numeric variable x1 whose value we want to use to predict y.
With a categorical variable x2, with two categories (left and right), that we also want to use to predict y.
Blue circles represent known observations with known y, x1, x2.
Since we can’t plot all three variables on a 2d plot, we split them into two 2d plots. The left plot shows how the combination of variables x1 and x2=left is connected to the variable y. The second shows how the combination of variables x1 and x2=right is connected to the variable y.
The black line represents how our model predicts the relationship between y and x1 for both variants of x2. The orange circles represent new predictions of y for observations where we only know x1 and x2. We put the orange circles in the proper place on the black line to get predicted values for particular observations. Their distribution is similar to that of the blue circles.
As can clearly be seen, the distribution and the pattern connecting y and x1 differ between the two categories of x2.
When a new observation arrives, with known x1 and x2, we will be able to make new predictions.
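Here is a minimal sketch of this setup in Python with scikit-learn (again an assumption, as the article shows no code). The synthetic data mimics the two plots: the slope linking y to x1 differs between the categories of x2, so the model includes an interaction term:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200

x1 = rng.uniform(0, 10, size=n)               # numeric predictor
x2 = rng.choice(["left", "right"], size=n)    # categorical predictor

# As in the two plots: the y-x1 relationship differs by category of x2.
slope = np.where(x2 == "left", 1.5, 3.0)
y = slope * x1 + rng.normal(scale=2.0, size=n)

def features(x1, x2):
    """Numeric x1, a 0/1 dummy for x2, and their interaction, so the
    fitted slope is allowed to differ between the two categories."""
    x1 = np.asarray(x1, dtype=float)
    d = (np.asarray(x2) == "right").astype(float)
    return np.column_stack([x1, d, x1 * d])

model = LinearRegression().fit(features(x1, x2), y)

# The "orange circles": predicted y for new observations with known x1, x2.
print(model.predict(features([2.0, 8.0], ["left", "right"])))
```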
Discretization
Even if your target variable is numeric, sometimes it’s better to use classification methods instead of approximation, for instance when you have mostly zero target values and just a few non-zero values. Change the latter to 1; you then have two categories: 1 (positive value of your target variable) and 0. You can also split the numeric variable into multiple subgroups (apartment prices into low, medium, and high, using equal bin widths) and predict them with classification algorithms. This process is called discretization.
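A minimal sketch of both discretization variants, using pandas (the data values are made up for illustration):

```python
import pandas as pd

# A numeric target that is mostly zeros, plus a few positive values.
prices = pd.Series([0, 0, 0, 120_000, 0, 450_000, 0, 90_000])

# Variant 1: binary target (1 = any positive value, 0 = zero).
binary_target = (prices > 0).astype(int)

# Variant 2: equal-width bins, e.g. low / medium / high apartment prices.
three_level_target = pd.cut(prices, bins=3, labels=["low", "medium", "high"])

print(binary_target.tolist())
print(three_level_target.tolist())
```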
In this article, I illustrate the concept of asymmetric keys with a simple example. Rather than discussing algorithms such as RSA (still widely used, for instance to set up a secure website), I focus on a system that is easier to understand, based on random permutations. I discuss how to generate these random permutations and compound them, and how to enhance such a system using steganography techniques. I also explain why permutation-based cryptography is not good for public key encryption. In particular, I show how such a system can be reverse-engineered, no matter how sophisticated it is, using cryptanalysis methods. This article also features some nontrivial, interesting asymptotic properties of permutations (usually not taught in math classes) as well as a connection with a specific kind of matrix, using simple English rather than advanced math, so that it can be understood by a wide audience.
1. Description of my public key encryption system
Here x is the original message created by the sender, and y is the encrypted version that the receiver gets. The original message can be described as a sequence of bits (zeros and ones). This is the format in which it is internally encoded on a computer or when traveling through the Internet, be it encrypted or not, as computers only deal with bits (we are not talking about quantum computers or the quantum Internet here, which operate differently).
The general system can be broken down into three main components:
Pre-processing: blurring the message to make it appear like random noise
Encryption via bit-reshuffling
Decryption
We now explain these three steps. Note that the whole system processes information by blocks, each block (say 2048 bits) being processed separately.
1.1. Blurring the message
This step consists of adding random bits at the end of each block (sometimes referred to as padding), then performing an XOR to further randomize the message. The bits to be added consist of zeroes and ones in such a proportion that the resulting, extended block has roughly 50 percent zeroes and 50 percent ones. For instance, if the original block contains 2048 bits, the extended block may contain up to 4096 bits.
Then, use a random string of bits, for instance 4096 binary digits of the square root of two, and do a bitwise XOR (see here) with the 4096 bits obtained in the previous step. The resulting bit string is the input for the next step.
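Here is a minimal Python sketch of this blurring step. The balancing rule, the random filler bits standing in for digits of the square root of two, and the helper names are illustrative assumptions, not the author’s implementation:

```python
import random

BLOCK, EXT = 2048, 4096   # block sizes from the article's example

def blur(block_bits, key_bits):
    """Section 1.1: pad the block so zeros and ones are roughly balanced,
    then XOR with a fixed pseudo-random key."""
    ones = sum(block_bits)
    zeros = len(block_bits) - ones
    # Append the rarer bit until counts balance, then random filler bits.
    pad = [1] * max(0, zeros - ones) + [0] * max(0, ones - zeros)
    filler = EXT - len(block_bits) - len(pad)
    pad += [random.randint(0, 1) for _ in range(filler)]
    extended = block_bits + pad
    return [b ^ k for b, k in zip(extended, key_bits)]

message = [random.randint(0, 1) for _ in range(BLOCK)]
key = [random.randint(0, 1) for _ in range(EXT)]  # stand-in for sqrt(2) digits
blurred = blur(message, key)

# XOR is its own inverse: reapplying the key and dropping the padding
# recovers the original block (the decryption side, section 1.3).
assert [b ^ k for b, k in zip(blurred, key)][:BLOCK] == message
```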
1.2. Actual encryption step
The block to be encoded is still denoted as x, though it is assumed to be the output of the step discussed in section 1.1, not part of the original message. The encryption step transforms x into y, and the general transformation can be described by

y = K * x
Here * is an associative operator, typically matrix multiplication or the composition operator between two functions, the latter usually denoted o, as in (f o g)(x) = f(g(x)). The transforms K and L can be seen as permutation matrices. In our case they are actual permutations whose purpose is to reshuffle the bits of x, but permutations can be represented by matrices. The crucial element here is that L * K = L^n = I (that is, L at power n is the identity operator): this allows us to easily decrypt the message. Indeed, x = L * y. We need to be very careful in our choice of L, so that the smallest n satisfying L^n = I is very large. More on this in section 2. This is related to the mathematical theory of finite groups, but the reader does not need to be familiar with group theory to understand the concept. It is enough to know that permutations can be multiplied (composed), raised to any power, or inverted, just like matrices. More about this can be found here.
That said, the public and private keys are:
Public key: K (this is all the sender needs to know to encrypt the block x as y = K * x)
Private keys: n and L (kept secret by the recipient); the decrypted block is x = L * y
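Here is a minimal Python sketch of the encryption and decryption steps, representing permutations as 0-based lists; the representation and helper names are mine, not the author’s implementation, and the toy permutation is the one analyzed in section 2.1:

```python
def apply_perm(perm, bits):
    """Reshuffle: the bit at position i moves to position perm[i]
    (permutations stored as 0-based Python lists)."""
    out = [None] * len(bits)
    for i, p in enumerate(perm):
        out[p] = bits[i]
    return out

def compose(p, q):
    """The permutation 'apply p, then q'."""
    return [q[p[i]] for i in range(len(p))]

# Toy example: L = (5, 4, 1, 2, 3) from section 2.1, written 0-based.
# Its order is n = 6, so the public key is K = L^(n-1) = L^5.
L = [4, 3, 0, 1, 2]
K = L
for _ in range(4):                 # naive L^5; section 2 gives a fast method
    K = compose(K, L)

x = [1, 0, 1, 1, 0]                # a toy 5-bit block
y = apply_perm(K, x)               # sender encrypts with the public key K
assert apply_perm(L, y) == x       # receiver decrypts with the private key L
```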
1.3. Decryption step
I explained in section 1.2 how to retrieve the block x when you receive y. Once a block is decrypted, you still need to reverse the step described in section 1.1. This is accomplished by applying to x the same XOR as in section 1.1, then removing the padding (the extra bits that were added to pre-process the message).
2. About the random permutations
Many algorithms are available to reshuffle the bits of x; see for instance here. Our focus is to explain the simplest one, and to discuss some interesting background about permutations, in order to reverse-engineer our encryption system (see section 3).
2.1. Permutation algebra: basics
Let’s begin with basic definitions. A permutation L of m elements can be represented by an m-dimensional vector. For instance, L = (5, 4, 1, 2, 3) means that the first element of your bitstream is moved to position 5, the second one to position 4, the third one to position 1, and so forth. This can be written as L(1) = 5, L(2) = 4, L(3) = 1, L(4) = 2, and L(5) = 3. Now the square of L is simply L(L), and the n-th power is L(L(L(…))) where L appears n times in that expression. The order of a permutation (see here) is the smallest n such that L^n is the identity permutation.
Each permutation is made up of a number of usually small sub-cycles, themselves treated as sub-permutations. For instance, in our example, L(1) = 5, L(5) = 3, L(3) = 1. This constitutes a sub-cycle of length 3. The other cycle, of length 2, is L(2) = 4, L(4) = 2. To compute the order of a permutation, compute the orders of each sub-cycle. The least common multiple of these orders is the order of your permutation. The successive powers of a permutation have the same sub-cycle structure. As a result, if K is a power of L, and L has order n, then both L^n and K^n are the identity permutation. This fact is of crucial importance to reverse-engineer this encryption system.
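A minimal Python sketch of the sub-cycle decomposition and the order computation (0-based lists, so the article’s L = (5, 4, 1, 2, 3) becomes [4, 3, 0, 1, 2]; the helper names are mine):

```python
from math import lcm

def cycles(perm):
    """Decompose a 0-based permutation into its sub-cycles."""
    seen, out = set(), []
    for start in range(len(perm)):
        if start not in seen:
            cycle, i = [], start
            while i not in seen:
                seen.add(i)
                cycle.append(i)
                i = perm[i]
            out.append(cycle)
    return out

def order(perm):
    """The order is the least common multiple of the sub-cycle lengths."""
    return lcm(*(len(c) for c in cycles(perm)))

L = [4, 3, 0, 1, 2]       # the article's L = (5, 4, 1, 2, 3), 0-based
print(cycles(L))          # [[0, 4, 2], [1, 3]]: lengths 3 and 2
print(order(L))           # lcm(3, 2) = 6
```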
Finally, the power of a permutation can be computed very fast, using the exponentiation by squaring algorithm, applied to permutations. Thus even if the order n is very large, it is easy to compute K (the public key). Unfortunately, the same algorithm can be used by a hacker to discover the private key L, and the order n (kept secret) of the permutation in question, once she has discovered the sub-cycles of K (which is easy to do, as illustrated in my example). For the average length of a sub-cycle in a random permutation, see this article.
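And a minimal sketch of exponentiation by squaring applied to permutations, which needs only O(log k) compositions instead of k (again my helper names, not the author’s code):

```python
def compose(p, q):
    """The permutation 'apply p, then q'."""
    return [q[p[i]] for i in range(len(p))]

def perm_power(perm, k):
    """perm^k via exponentiation by squaring: O(log k) compositions."""
    result = list(range(len(perm)))     # identity permutation
    base = list(perm)
    while k > 0:
        if k & 1:
            result = compose(result, base)
        base = compose(base, base)
        k >>= 1
    return result

L = [4, 3, 0, 1, 2]
assert perm_power(L, 6) == list(range(5))   # L has order 6: L^6 = identity
```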
2.2. Main asymptotic result
The expected order n of a random permutation of length m (that is, when reshuffling m bits) grows approximately like n ≈ exp(c sqrt(m / log m)), for some constant c.
For details, see here. For instance, if m = 4,096 then n is approximately equal to 6 x 10^10. If m = 65,536, then n is approximately equal to 2 x 10^37. It is possible to add many bits all equal to zero to the block being encrypted, to increase its size m and thus n, without increasing too much the size of the encrypted message after compression. However, if used with a public key, this encryption system has a fundamental flaw discussed in section 3, no matter how large n is.
2.3. Random permutations
The easiest way to produce a random permutation of m elements is as follows.
Generate L(1) as a pseudo random integer between 1 and m. If L(1) = 1, repeat until L(1) is different from 1.
Assume that L(1), …, L(k-1) have been generated. Generate L(k) as a pseudo random integer between 1 and m. If L(k) is equal to one of the previous L(1), …, L(k-1), or if it is equal to k, repeat until this is no longer the case.
Stop after generating the last entry, L(m).
I use binary digits of irrational numbers, stored in a large table, to simulate random integers, but there are better (faster) solutions. Also, the Fisher-Yates algorithm (see here) is more efficient.
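For reference, here is a minimal Python sketch of the Fisher-Yates shuffle (a fixed-point-avoiding variant like the algorithm described above would add a rejection test at each draw; the function name is mine):

```python
import random

def random_permutation(m):
    """Fisher-Yates: a uniformly random permutation of 0..m-1 in O(m)."""
    perm = list(range(m))
    for i in range(m - 1, 0, -1):
        j = random.randint(0, i)          # uniform in [0, i], inclusive
        perm[i], perm[j] = perm[j], perm[i]
    return perm

print(random_permutation(10))
```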
3. Reverse-engineering the system: cryptanalysis
To reverse-engineer my system, you need to be able to decrypt the encrypted block y knowing only the public key K, but not the private key L nor n. As discussed in section 2, the first step is to identify all the sub-cycles in the permutation K. This is easily done; see the example in section 2.1. Once this is accomplished, compute the orders of these sub-cycle permutations and take the least common multiple of these orders. Again, this is easy to do, and it allows you to retrieve n even though it was kept secret. Now you know that K^n is the identity permutation. Compute K at power n-1, and apply this new permutation to the encrypted block y. Since y = K * x, you get the following:

K^(n-1) * y = K^(n-1) * K * x = K^n * x = x
Now you’ve found x, problem solved. You can compute K at the power n-1 very fast even if n is very large, using the exponentiation by squaring algorithm mentioned in section 2.1. Of course you also need to undo the step discussed in section 1.1 to really fully decrypt the message, but that is another problem. The goal here was simply to break the step described in section 1.2.
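Putting the pieces together, here is a minimal, self-contained Python sketch of the attack: it recovers the secret order n from the public key K alone, then inverts the encryption (helper names are mine, and the 5-element demo is the running toy example):

```python
from math import lcm

def apply_perm(perm, bits):
    out = [None] * len(bits)
    for i, p in enumerate(perm):
        out[p] = bits[i]
    return out

def compose(p, q):
    return [q[p[i]] for i in range(len(p))]

def perm_power(perm, k):
    result, base = list(range(len(perm))), list(perm)
    while k > 0:
        if k & 1:
            result = compose(result, base)
        base = compose(base, base)
        k >>= 1
    return result

def perm_order(perm):
    """Least common multiple of the sub-cycle lengths (section 2.1)."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start not in seen:
            i, length = start, 0
            while i not in seen:
                seen.add(i)
                i = perm[i]
                length += 1
            lengths.append(length)
    return lcm(*lengths)

def crack(K, y):
    """Recover x from y using only the public key K: find n from the
    sub-cycles of K, then apply K^(n-1), which is the inverse of K."""
    n = perm_order(K)
    return apply_perm(perm_power(K, n - 1), y)

# Demo: private key L, public key K = L^(n-1), toy 5-bit block.
L = [4, 3, 0, 1, 2]
K = perm_power(L, perm_order(L) - 1)
x = [1, 0, 1, 1, 0]
y = apply_perm(K, x)
assert crack(K, y) == x        # decrypted without ever knowing L or n
```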
About the author: Vincent Granville is a data science pioneer, mathematician, book author (Wiley), patent owner, former post-doc at Cambridge University, former VC-funded executive, with 20+ years of corporate experience including CNET, NBC, Visa, Wells Fargo, Microsoft, eBay. Vincent is also self-publisher at DataShaping.com, and founded and co-founded a few start-ups, including one with a successful exit (Data Science Central acquired by Tech Target). You can access Vincent’s articles and books, here.
With stagnating consumption and restrictions on the flow of people and merchandise, the footwear and apparel industry is quickly overhauling its strategy for the “new normal”. COVID-19 is forcing the industry to be more focused, sustainable, and efficient in its processes. In doing so, it is also forcing the industry to become more digitally savvy. The challenge before industry leaders is to be more “connected” across the value chain, while being “physically disconnected”. How will the industry’s operations look a few months from now as they are re-shaped by the crisis?
If anything is certain, it is this: the industry will adopt additional digital assets to drive product development, adopt technology solutions that connect processes and move business towards more sustainable and resilient processes.
The PLM solutions that the footwear and apparel industry uses were not designed for an eventuality like COVID-19. As an example, there is an over-reliance on physical samples, which right now are difficult to make and ship. This has presented a major challenge in the development process. Add to that the problem of getting multiple iterations through the sampling process. For an industry where “physical review” and “in-use” evaluation of the performance characteristics of materials are of immense importance, this can be very problematic. The challenge is to ensure that stakeholders are connected and able to meaningfully perform their activities (in the case of samples) in a collaborative environment, while working around the lockdowns and lack of co-location imposed by COVID-19.
Brooks Running, the brand behind high-performance running shoes, clothing, and accessories headquartered in Seattle, USA, is finding ways to address the challenges. “We are now dipping our toes in 3D design and modelling,” says Cheryl Buck, Senior Manager, IT Business Systems, Brooks Running, who is responsible for the FlexPLM implementation in her organization. “We are examining visual ways to create new products and ensuring extremely tight communication (between functions).”
The goal is to improve collaboration and build end-to-end visibility, coupled with accurate product development information (schedules, plans, resources, designs, testing, approvals, etc.) that reduces re-work and delivers faster decisions. The key to enabling this is to make real-time data and intuitive platforms with a modern UI available to all stakeholders. The result is improved productivity and efficiency, despite the barriers COVID-19 has placed on business as we knew it. Smart manufacturers like Brooks Running are going one step further. They are using role-based applications to extend the ability of their PLM (see figure below). These apps cater to the needs of individual users.
(App segmentation is illustrative, two or more segments can be combined to create apps to support business needs)
Brooks Running has found that the app ecosystem meets several process expectations. “It is difficult to configure PLM to individual needs,” observes Buck, “So providing special functional apps is a great way to address the gaps.” A readily available API framework allows Brooks Running to exchange information between systems and applications. The framework is flexible to support scale and is also cloud compatible.
For Brooks Running, COVID-19 has become a trigger for being creative and smoothening its processes. The organization has installed high resolution cameras to better view products and materials being shared in video conferences, and is incorporating 3D modelling into its processes. This makes it easier for stakeholders to assess designs as they can see more detail (testing has been moved to locations that are less affected by COVID-19 and when that is impossible, Brooks Running uses virtual fitting applications). The end result is that teams can easily share product development, acquire feedback and move ahead. The system keeps information on plans and schedules updated, shares status and decisions, and keeps things transparent.
Brooks Running finds that a bespoke Style Tracker application designed for the organization is proving to be extremely helpful in that it provides a reference point for styles in their development path within the season, tracks due dates for stakeholders, signals what needs to be done to get to the next check point and provides a simple way for leadership to track progress. “The style tracking app is a big win for us,” says Buck.
The experience of Brooks Running provides footwear and apparel retailers with a new perspective on the possibility to improve PLM outcomes and ROI:
Provide user group/process specific and UI rich tools
Enable actionable insights
Leverage single or multiple source of data
Enable option for access across mobile platforms
Make future upgrades easy and economical
Provide an easy way for incorporating latest technology platforms like 3D, Artificial Intelligence, RPA, Augmented Reality/Virtual Reality
FlexPLM is a market leader in the retail PLM space, and combining it with ITC Infotech’s layer of apps gives FlexPLM partners an easy, efficient, and scalable mechanism to align the solution with the needs and expectations of an individual business, with minimal impact on future upgrades, which is a significant plus. In addition, the app framework makes alignment across different configurations easy, which is an added advantage. Organizations that don’t want to modify their Flex implementation will find immense appeal in the app ecosystem as a way to extend the capabilities of their PLM, especially as COVID-19 makes it necessary to bring innovation and ingenuity to the forefront.
Authors:
Cheryl Buck
Senior Manager, IT Business Systems, Brooks Running
When working with a team of software developers, many clients find that the team’s structure includes quite a lot of people. And while the responsibilities of developers are quite clear, things can get complicated with business managers, product managers, and other team members.
In this article, we will explain how software development teams structure their work, what the roles and responsibilities of each team member are, and how to find out if your team is working well with your product.
Approaches to team structure:
There are several ways to organize a flexible development team – universal, specialized, and hybrid.
Universal:
A team consisting of people with a wide range of skills and experience is called «universal». Such teams are usually responsible for the complex development of the entire project or a separate function. This is the most common project team structure for outsourcing companies.
Pros of the universal approach:
Each team member is well-versed in the product, so they can focus on improving it as a whole.
Everyone is competent enough to do their job without being dependent on others, but still can be involved in the development of many project elements.
Cons of the universal approach:
Since no one’s knowledge is very specialized, it is sometimes necessary to bring in a new team member in the middle of the project.
Specialized:
A «specialized» team consists of specialists with specific skills to solve narrow problems. Each professional has a niche and therefore takes full responsibility for their project element. This is also quite common for software development teams.
Pros of the specialized approach:
Each member works on a specific project element without crossing into other modules.
The team can build complex high-quality systems very quickly.
Cons of the specialized approach:
Since everyone works individually, there is a possibility that various components will conflict with each other.
There may be gaps in communication due to a lack of common knowledge.
Hybrid:
The «hybrid» structure of the project team is essentially a combination of universals and specialists. Such teams work on the project as a whole but can narrow down their tasks if necessary. A hybrid approach is the best of both worlds.
Pros of the hybrid approach:
There are both specialists who create individual components and universals who supervise system integration.
The development process is as efficient as possible.
Cons of the hybrid approach:
It can be difficult to coordinate people with different approaches to the work process.
Creating a hybrid team is time-consuming and expensive.
Traditional vs Agile team
Traditional team:
Project management is top-down. The project manager is responsible for the execution of the work;
Teams can work on several projects simultaneously;
The organization measures individual productivity;
Clear roles and positions;
No team size limit;
Employees are called human resources.
Agile team:
A self-organized and self-governing team. The role of a project manager is to train the team, remove obstacles, and avoid distractions;
Teams focus on one project at a time;
The organization evaluates team performance;
Cross-functional teams, skills trump ranks;
Three to nine people on the team;
Employees are called talents.
Roles in a traditional development team
Business Analyst (BA)
The business analyst connects the development side with the client side during the development process. The BA communicates with the client, uncovers all the details, problems, and the vision of the solution, formalizes these considerations in a document, and hands the result to the developers.
Project Manager (PM)
The main task of a PM is to manage all processes for the development of a new product. This includes ensuring technical implementation, managing technical teams, generating detailed reporting for further analysis and improvement of business processes, and making sure deliverables are ready on time.
UX/UI Designer
A UX/UI designer is a specialist who is primarily concerned with how a user interacts with a product and visualises the system interface. UX/UI designers explore different approaches to solving a specific user problem. Their main task is to make sure that the product flows logically from one step to the other.
Developers (Front-end/ Back-end, Mobile)
No software project is possible without developers. Depending on the end software platform and the future product stack, there’s a variety of different types of developers, including web developers (front-end, back-end) and mobile developers (iOS, Android). Frontend developers work with the client side of the future program and create the future software interface. Backend developers work with the server side and build the product functionality that the user doesn’t see. In turn, mobile developers create the future application for various mobile devices.
Quality Assurance Engineer (QA)
The job of a Quality Assurance engineer is to verify the correctness of all stages of development and the correct working of the final product. QA engineers not only check the work of the application but also monitor compliance with software development standards, interact with developers, designers, and customers, and prevent bugs and errors from appearing in the software.
What’s an Agile team?
An Agile team is a cross-functional and self-organized team that is responsible for delivering a new end-to-end product (from start to finish). And a team like this has the resources to do it.
You need an Agile team to deliver the product to the market faster, reduce the amount of micromanagement, and better fit the client’s needs.
The cross-functional and self-organized nature of an Agile team means that the result is a common responsibility. Therefore, the team members work not only within their respective competencies but also make sure to help each other.
You need an Agile team when there is a complex product with a high degree of uncertainty and risk. The Agile team delivers the product more frequently and receives feedback from the customer much earlier. As a result, the solution emerges as a more valuable and viable product that generates profits.
Working for an Agile team
The development of complex products requires a flexible approach to work. The process is characterized by regular situations that call for competence in several areas simultaneously; that’s when different team members have to cooperate to solve a problem. An Agile team is also characterized by the fact that new pressing tasks emerge quite often, and sometimes they require more attention than the previous ones. In situations like that, the team adjusts to the new realities, implementing the basic principle: «Readiness for change is more important than following the original plan». Of course, teams try to avoid such interruptions. To do this, for example, Scrum introduced sprint work, during which no new tasks can be assigned to the team. Since a sprint lasts only a short time, usually two weeks, the next sprint allows the team to take on a new task, and thus bring more business value at a brisker pace.
Roles in an Agile Team
Product owner
The product owner is the person who manages the product on behalf of the company. This person is responsible for ensuring that the product creates value for customers and users, as well as for the company that provides access to it. To fulfil their responsibilities, the product owner must maintain contact with users, collaborate with the development team, and understand how the company works.
Scrum master
Scrum is a technique that helps teams work together. Just as a sports team prepares for a deciding game, the team must learn from experience, self-organize while working on a problem, and analyze its successes and failures to continually improve.
The Scrum Master maintains the scrum culture in the team and ensures that its principles are adhered to. This person follows the values of this methodology in everything while being flexible and always ready to use new opportunities for the benefit of the team’s work process.
Development team
The development team consists of people such as project managers, business analysts, UX/UI designers, frontend and backend developers, QA engineers, and other necessary specialists. As you can see, just one part of an Agile team already contains all the members of a traditional development team.
Some advice about implementing Agile methodology
Plan releases in advance
Applying Agile places a lot of emphasis on planning. Teamwork from the early stages builds a common understanding and vision of the product, and helps to prioritize and make compromises.
When planning a release, some teams visualize product capabilities, working with customers to develop a list of functional requirements. This process often unlocks new opportunities, helps group user stories, and determines their order. The resulting data will help the team make decisions before developing the project. By taking the time to research early on, you will create a solid foundation for your future product.
Make UX research before starting the sprint
It is difficult to create a design and implement it within the same sprint. Two weeks is not enough to do research, create a site skeleton and design, and develop user stories.
The agile methodology assumes equal distribution of user experience/interface and product development. Ideally, you should finish your research and design before starting the sprint. For example, a UX specialist creates the design of the first screen of the site during the first sprint. During the second sprint, the developers take the completed design and write the code, while the designers work on other pages.
Build a collaboration culture
Social skills are key to the success of Agile projects. No wonder: according to the Agile Manifesto, individuals and interactions are more valuable than processes and tools. Well-functioning information transfer processes are essential for any digital product development company. However, in an Agile environment where deadlines are short and time is limited, collaboration is even more important.
Some organizations work using design thinking techniques: coming up with new ideas and brainstorming. This is done to stimulate effective communication and teamwork.
Work in iterations and don’t try to make it perfect right away
Many practitioners opt for short-cycle, iterative development. Start with low-res prototypes (sketches, wireframes) and work through them based on feedback from users and clients. In other words, make mistakes quickly and often.
Using wireframes is natural for Agile processes: team members can quickly test their ideas with a small investment of effort and money. It’s much easier to fix design flaws at this stage than after the code has been written.
Take part in scrum meetings
Of the four types of scrum meetings, daily “stand-up” meetings are the ones most experts endorse. They are held daily in one place and last no more than 15 minutes. The main purpose of the stand-up is to keep everyone in the loop and identify problems.
Sometimes people protest against daily meetings because, with all the rest of the communication (backlog grooming and planning, showing results, and retrospective analysis), they feel these meetings waste valuable work time. However, this ritual is needed so that all team members stay in the know, work harmoniously, and, in case of a problem, act as a single organism.
Turn user research into a team event
Testing the system for usability has a positive effect on decision making. Even with tight deadlines, Agile teams can incorporate user research into their workflow; this dispels the myth that such testing is too laborious or expensive. There are different options: for example, weekly testing by a group of users is one possible approach. It’s a great idea to turn a usability test into a team event where team members (and project participants) observe and participate in discussions. Aligning design decisions with user data, rather than with subjective opinion or unverified assumptions, speeds up the work process.
Constant interaction with product clients
Experts emphasize the importance of customers’ participation in key stages of project development. This is consistent with the principles of Agile: value cooperation with a client more than a contract on paper. Contracts are important, but if they get in the way of teamwork, they only hinder progress.
Companies work with customers in different ways. Techniques range from building a core leadership team early and building partnerships between UX professionals and clients, to inviting customers to user reviews and regular presentations.
Establish clear roles and responsibilities in the team
It is important that team members fully understand their role and the roles of their colleagues. For effective Agile work, project participants need to know what is expected of them and what is within their sphere of influence.
Traditional Agile methods define teams and their roles, but UX specialists are an exception: Scrum teams generally don’t include UX. It is especially important to set the right expectations and help people think about user experience in an Agile context, because UX roles and processes may not be familiar to the rest of the team.
Improve your methodology
As they evolve, Agile teams experiment with different techniques and adapt them to their environment. Apply iterative design not only to the user interface but to the entire project.
Agile is a foundation that helps structure work but doesn’t say how you should run a project. It encourages teams to organize themselves, find ways to work more efficiently and simplify processes.
Leading development teams are becoming more multidisciplinary and flexible than ever before. Successful teams use mixed approaches. Agile was invented in opposition to traditional corporate practices, but each has its merits, and the two can be combined in practice.
Conclusion
Nowadays, the Agile approach is probably the best option for a software development team. If you are looking for a software development company to entrust with making the application of your dreams, try to consider a team with an Agile structure. A team that follows modern methodologies can help you create outstanding software and get maximum profit from both your partnership and the final product.
I got my second Pfizer Covid shot today, which means that I’m now part of the growing post-pandemic population. Having been more or less in quarantine since the middle of March 2020, I’m more than ready to leave behind the masks, to visit the coffeeshop and the barber and the gym, and to be able to go into a restaurant without trying to eat around a 3-ply paper facial covering.
Concerns remain, of course, including resistant strains of COVID-19 still floating around worryingly, but we are also reaching a stage where the health care system is less and less likely to be overwhelmed by the virus, in whatever form it takes, which has always been the primary purpose of the lockdown. Here in Washington State, most restrictions should probably be dropped by the middle of June 2021.
However, in the next several months, it is very likely that the following scenario will be repeated over and over again. The company that you work for sends out email notices saying that, with the pandemic now in the rearview mirror, workers will be expected to return full time to the office, or face being fired.
Some people will do so without reservation, having missed the day-to-day interactions of being in an office. Many of them will be managers, will be older, and will likely be looking forward to being able to see everyone working productively just like the old days. While a broad oversimplification, let’s call them Team Extrovert. They thrive in the kind of social interactions that working in the office brings, and they enjoy the politics that comes from having a group of people forced to work in the same area daily.
The members of Team Introvert, on the other hand, are awaiting such emails with dread. For the first time in years, they’ve actually managed to be very productive: there have been far fewer unnecessary interruptions to their day, they could work later into the evening, they could do so in an environment they could control (more or less), and in general they could make the most of the time they had.
Going back to the office will mean giving up that flexibility. It will mean once again finding babysitters and getting their kids to daycare or school, and dealing with a medical emergency will mean hours not working. It will mean losing a couple of hours a day commuting on top of the eight-hour days they work, and it will mean that if they are in meetings all day, they will have to spend the evenings getting the work done that they couldn’t get done during the day. It will mean dealing with unwanted sexual attention, the disapproval of the office gossips, and the uncompensated late nights.
Evaluating Assumptions
The last year has been a boon for work researchers, as one study after another has field-tested several key assumptions about the work-life balance:
False: Technology Is Too Immature
“The technological underpinnings necessary to work from home were insufficient to being able to support it.”
Zoom came out of nowhere as a free-to-use telepresence platform to largely dominate the space in under a year. Market leaders in that space stumbled badly as they failed to recognize the need to provide no-cost/low-cost telepresence software in the first months of the pandemic. Microsoft pivoted quickly to provide similar software, Teams, for organizations that worked with its increasingly ubiquitous online Office suite, and Slack picked up the slack (in conjunction with a whole ecosystem of other tools) to fill out the collaboration space.
One significant consequence is now coming to fruition: Otter.ai, an online transcription service using machine learning-based algorithms, has partnered with Zoom to enable auto-transcription (including figuring out who is speaking) and to embed action items into the generated output. This means that one of the most onerous tasks of conducting a meeting, creating an accessible record of the conversation, can happen automatically.
The upshot is that, due to the pandemic, online teleconferences have suddenly become searchable and, by extension, manipulable. The impact of this on businesses will be profound, if only because every meeting, not just the few that can afford to have a stenographer present, creates a referenceable record. This also has the potential to make meetings more productive, as it can help a manager identify who is actually providing valuable input, who is grandstanding, and who is being shut out. This form of collaboration is much harder to replicate in person.
False: Collaboration Requires Proximity
This is the watercooler argument. Collaboration, or so the story goes, is centered around the watercooler, where people can meet other people at random within the organization and, through conversations, learn about new ideas or work out problems. Admittedly, today the watercooler is more likely a Keurig coffee machine or its equivalent, but the idea, that collaboration occurs due to chance encounters between different people in informal settings, is pretty much the same.
The problem with this is that it is a bit simplistic. Yes, ideas can come when people meet in informal settings, one of the reasons that conferences are actually pretty effective for stimulating new ideas, but the real benefit comes primarily due to the fact that the people involved are typically not in the same company, or in many cases not even in the same industry. Instead, during these encounters, people with different viewpoints (and different cultural referents) end up discussing problems that they have, and the approaches that they used to solve similar problems in different ways.
The key here is the differences involved. This is one of the reasons that consultants can be useful; yet the more embedded they become within an organization, the less valuable their contributions. They are valuable primarily because they represent a different perspective on a given problem, and it is the interaction that consultants have with a given, established group that can spark innovation.
Collaboration tools, such as Slack, provide both a way for disparate people to interact and a record of that interaction that can be mined after the fact. This kind of chat is largely asynchronous while being more immediate than other asynchronous channels such as email. Not surprisingly, programmers and technical people in general take to this form of collaboration readily, but members of Team Extrovert (who tend to do better with face-to-face communication) often avoid it, because it doesn’t work as well for establishing social dominance, and because it isn’t synchronous.
False: Working in the Office Is More Secure
The idea that working in an office would be more secure than working remotely is a compelling one, since even a decade ago that would likely have been true. However, several changes, some pre-pandemic, some intra-pandemic, have changed that landscape dramatically.
For starters, by 2020, a significant number of companies had already begun moving their data infrastructures to the cloud, rather than using on-prem services. Sometimes the reasons came down to cost – you were no longer paying for physical infrastructure – but part of it also came down to the fact that cloud service providers had a strong incentive to provide the best protection they could to their networks. Most of the major data breaches that took place in the last ten years occurred not with large scale providers such as AWS or Azure, but with on-prem data storage facilities.
Additionally, once the Pandemic did force a lockdown, all of those supposedly secure on-premise data stores were left to sit idle, with skeleton crews maintaining them. The Pandemic hastened the demise of stand-alone applications, as these became difficult to secure, forcing the acceleration of web based services.
The global move towards HTTPS in 2017 also had the side effect of making man-in-the-middle attacks all but non-existent, and even keystroke analyzers and other sniffing tools were defeated, primarily because the pandemic forced workers to disperse, making such hacking tools much less effective. Similarly, one of the biggest mechanisms for hacking, social hacking, where spies would go into company office buildings and note passwords and open ports, or would conduct dumpster diving in company trash bins, was foiled primarily because the workforce was distributed.
Hacking still exists, but it’s becoming more difficult, and increasingly, companies are resorting to encrypted memory chips and data stores, rather than to guarding against social engineering, to keep data secure. Now, it is certainly possible that an enterprising data thief could make a surreptitious run at laptops in the local coffeeshop, but again, this is a high-cost, low-profit form of hacking.
False: Remote Workers Are Less Productive Than Onsite Workers
This is the tree-falling-in-a-forest principle: if people are not being watched, how do you as a manager know that they are actually working rather than browsing the Internet for porn or otherwise wasting time? The reality is that you don’t. The question is whether you should assume that they will do the latter without some kind of oversight.
One of the most significant things to come out of the Agile movement is the shift in thinking away from allocation of hours as a metric of completion to whether or not the work itself is being done. DevOps has increasingly automated this process, with the notion of continuous integrated builds, which works not just for developing software but for assembling anything digital. In general, this means that there are tangible artifacts that come out of doing one’s work consistently, and a good manager should be able to judge whether someone is working simply by looking at what they are producing.
Indeed, this notion of an audit trail is something that remote workers are well aware of. With most productivity tools now tied into some kind of auditable dashboard, a manager should be able to tell when there are problems without actually needing to see the people involved. However, for some managers this ability is a two-edged sword, as their own managers have access to the same dashboards, the same drawdown charts, and the same bug reports, rendering their oversight role redundant. This points to a bigger problem.
A number of recent studies have shown that when people have clear goals and targets, direct oversight can actually be counterproductive: workers, far from doing their best work, become worried that they will be judged unfairly for taking risks that could provide benefits. Put another way, heavy oversight makes it harder for them to concentrate, which is what they are being paid to do in the first place. In this light, the worker is treated as a criminal setting out to deliberately steal from or sabotage their employer. Not surprisingly, this can become a self-fulfilling prophecy, as these employees leave to find better employment under less stringent oversight, causing disruption in their wake.
Managing a remote workforce is different, and can be especially difficult when one’s definition of management comes down to ensuring that workers are engaged and on task. There are comparatively few jobs, especially in-office jobs, that require 9 to 5 engagement. People need time to think, to plan, and to re-engage in interrupted tasks. This is especially true when dealing with mental activities. Context switching, reorganizing thoughts and concentration to go from one task to another, takes time, and the more the need for concentration, the longer such context switching takes.
One interesting phenomenon that has taken hold during the pandemic is that many businesses now concentrate their meetings in the middle of the week, rather than scheduling them at random throughout the week. Arguably, this can be seen as creating a longer weekend, but in practice, what seems to happen is that people tend to use their Mondays and Fridays (or one of the weekend days) as days to concentrate on specific tasks without interruption. This helps them accomplish more with less stress. Workers may still provide a written report at the end of the day (or the week) summarizing what they’ve done, but this becomes part of a work routine rather than an interruption, per se.
False: On-Site Work Improves Company Morale and Employee Loyalty
I’ve attended a few company picnics, holiday parties, and corporate retreats over the years. They are supposed to enliven morale and increase esprit de corps but, in reality, they are fields filled with social land mines, where you interact as little as possible with anyone not in your immediate group for fear of running afoul of a Senior Vice President or, worse, the Head of Human Resources.
Company culture is an artificial construct. It has its place, but all too often the putative company culture is set out in an employee handbook that everyone is supposed to read but few actually do. The actual company culture is mostly imitative: one follows the whims and actions of the senior stakeholders in the company, even when these are potentially toxic.
Ironically, as the pandemic fades as a factor, corporate get-togethers may actually replace being in the office as the dominant mode of interaction. There are signs that this is already happening, especially as corporations become more distributed.
False: Everyone Wants To Return To The Office
Managers tend to be extroverts. They prefer face-to-face interaction, and in general are less likely to want to read written reports, even though these usually contain more information about how the projects are going. In some cases, written reports also make it harder to achieve deniability in case something does go wrong, though in this case you could argue that this is just a sign of bad management.
However, a significant percentage of the knowledge worker-based workforce are introverts. Introverts make up roughly 30% of the population, but that 30% corresponds to writers, designers, artists, programmers, analysts, architects, librarians, scientists, musicians, and other people who spend much of their time working in comparative solitude. When you factor this in, the number of people who are actually likely to be in offices probably comes closer to 55-60% of everyone who was in an office before.
This changes the dynamics of returning to the office (not returning to work itself, as none of these people deliberately stopped working) considerably. Most were more productive in the last eighteen months than they have been in years. Because they could work with minimal interruption (and that mostly asynchronous) and because they could control their environment, those largely introverted workers were able to use their skills more effectively.
Additionally, without the need to hire locally, companies could hire people anywhere. This had some unintentional side effects, as workers in San Francisco and New York migrated in droves towards smaller communities that were within an air-flight commute if need be, but where they were no longer forced into paying seven digits for a small house. A lot of companies made such positions contingent upon returning onsite post-COVID, but that provision may be impossible to enforce, especially as tightening labor markets in this critical sector make it a no-brainer for many people to opt to work for an employer with a more liberal work-from-home policy.
A recent LinkedIn survey of members made the startling discovery that, given a choice between working from home and a $30K bonus to return, 67% would prefer the work-from-home option. While this isn’t necessarily scientific, it hints at the reluctance many people feel about going back to the daily grind.
Are We At a Tipping Point?
As workers come reluctantly back to the office, there are several points of uncertainty that may very well derail such efforts.
The pandemic is not yet over. One of the principal reasons for the initial lockdown was to keep the healthcare system from being overwhelmed. In those places where the lockdown was followed closely, this never happened. In those places where it wasn’t, the healthcare system was overwhelmed. With multiple variants still arising, and a vaccine that likely still needs additional tweaking, the prospect of the pandemic continuing (or flaring back up) for at least another year is not out of the question.
Social resistance. Ironically, many people have now taken the social restrictions to heart, and it will be months, if not years, before the habits ingrained during the pandemic are overcome.
The threat of lawsuits. If a person goes back into the office, contracts COVID-19, and dies or is left disabled, is the employer liable? This really hasn’t been tested in the courts yet, and until it is, employers will be reluctant to be the first to find out.
Worker mobility. A tightening job market is colliding with two demographic trends: many people are retiring in 2021, and fewer people are entering the workforce. Add the natural consequences of ignoring employee loyalty, and defections to new opportunities are accelerating.
Tech trends. Collaboration technology is moving clearly in the direction of distributed work. Financially, the benefits of bringing people in-house are increasingly outweighed by the savings of distributing that same workforce.
What Do Managers Do Now?
If your company has not already been planning a post-pandemic strategy, hoping instead that things will return to normal, it is likely getting too late to start. Things will not return to the way they were pre-pandemic, simply because the pandemic has pulled forward, to today, the workplace we would most likely have reached ten years from now. This means that the post-pandemic organization is going to look (and act) very differently than it did in 2019.
There are several things that can be done to make the transition as painless as possible for all concerned.
Is The Office Even Required?
A number of companies had been skirting the edges of going fully virtual even before the pandemic, and as the lockdowns dragged on, they decided to make the jump to a facility-less existence. The move cut business costs considerably, and when the need to meet in person (or with clients) arose, these same companies rented hotel conference space at a considerable savings. These companies and others like them are crunching the numbers to see whether an extensive physical presence is really necessary anymore.
Evaluate Worker Safety
The pandemic is not going away. Rather, we are reaching a stage where COVID-19 is becoming manageable and survivable for most, albeit still a very dangerous disease for some. Before workers return to the office, consider establishing a vaccine screening protocol, treating workers who are not yet vaccinated as high-risk for return, and keep social distancing protocols in place for some time even after lockdowns are rolled back. It may also be worth bringing employees back in staggered tranches, with enough time between them (at least a month) to determine whether the virus is still spreading locally.
Triage
Identify those positions that absolutely must be on premises at all times, those that need to be on premises two to three days a week, and those that can work comfortably remotely. Be critical about your assumptions here: a facilities manager needs to be available or have backup, but an analyst or programmer usually does not. If someone is already working remotely and is at a physical remove, leave them there. After six months, re-evaluate. Chances are pretty good that you need fewer people onsite than you think.
Make Management and Reporting Asynchronous
To the extent possible, make tracking and reporting something that does not rely solely upon face-to-face meetings. Whether this involves Zoom or Teams-style meetings, Slack, various DevOps tools, or distributed office productivity tools, take advantage of the same tools (many of them part of the Agile community) that your teams are already using to communicate in an auditable fashion. Additionally, each person should post a log entry at the end of every day indicating where they are, what they are working on, and what issues need attention. It is the responsibility of managers to ensure that the conversations that do need to happen are facilitated.
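As one hypothetical way to wire up that daily log, the sketch below posts an end-of-day entry to a Slack incoming webhook (the URL and field names are placeholders, not a prescribed format), so that status lands in a searchable, auditable channel rather than a meeting:

```typescript
// A hypothetical end-of-day log entry posted to a Slack incoming webhook.
// Assumes Node 18+ (global fetch); the webhook URL below is a placeholder.
interface DailyLog {
  who: string;
  where: string;
  workingOn: string;
  blockers: string;
}

async function postDailyLog(log: DailyLog): Promise<void> {
  const text =
    `*${log.who}* (${log.where})\n` +
    `Working on: ${log.workingOn}\n` +
    `Blockers: ${log.blockers}`;
  // Slack incoming webhooks accept a simple JSON payload with a "text" field.
  await fetch("https://hooks.slack.com/services/T000/B000/XXXX", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
}

postDailyLog({
  who: "Ada",
  where: "home office",
  workingOn: "Q3 dashboard",
  blockers: "awaiting API keys",
}).catch(console.error);
```

The point is less the tooling than the habit: the log is written once, visible to everyone, and reviewable later, which a hallway conversation is not.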
Move Towards 3-2-2
Move as many of your meetings as possible toward the center of the work week, Tuesday through Thursday, then treat Monday and Friday as concentration days, with minimal meetings but higher expectations. Demos and reports should always be held during a midweek block. Similarly, identify core hours of availability during the week for team collaboration.
Improve Onboarding
Onboarding, whether of a new employee to the company or to a different group, is when you are most likely to see new hires quit, especially if badges or other access issues delay the process. Identify an onboarding partner within the group who is responsible for helping the newcomer get access to what they need in a timely fashion, and who can help new hires ramp up as fast as possible. While this is useful for any new recruit, it is especially important in distributed environments.
Buddy Up
When possible, buddy up coworkers on the same team so that they are also in the same (or an adjacent) time zone. If your operation is on the US East Coast but you have workers in Denver and Seattle, then those workers should be paired off. This provides an additional channel (and potential backup) for communication, while keeping people from having to work awkward hours because of time zone differences. This holds especially true for transnational teams.
Eliminate Timesheets
You are hiring people for their technical or creative expertise, not their time in seat. You can track hours through other tools (especially agile ones) to determine how long tasks take for planning purposes, but by moving to a goal-oriented rather than hour-oriented approach, you reward innovation and aptitude rather than attendance.
Make Goals Clear
On the subject of goals, your job as manager is to make sure that the goals that you want out of your hires are clear. This is especially true when managing remote workers, where it is easier to lose sight of what people are doing. You should also designate a number two who can work with the other team members at a more technical or creative level but can also help ensure that the goals are being met. This way, you, as a remote manager, can also interface with the rest of the organization while your number two interfaces with your team (this is YOUR partner).
Hold Both Online and Offline Team Building Exercises
Start with the assumption that everyone is remote, whether that means a desk in the same building, a coffee shop, their house, or a beach with good Wi-Fi access. Periodically engage in activities that promote team building but don’t require physical proximity, from online gaming to virtual coffee klatches. At the same time, especially as restrictions ease, plan on quarterly-to-annual conventions within the organization, perhaps in conjunction with conferences the organization would otherwise hold. Ironically, these meetings are likely to become more meaningful, because in many cases the only real contact you otherwise have with the organization is a face on a screen.
Don’t Penalize Remote Workers Or Unduly Reward Onsite Ones
Quite frequently, organizations have tended to look upon remote work as a privilege rather than a right, and it is a privilege that comes at some cost: if you’re not in the office, you can become effectively invisible to those who are. This means that when establishing both personal and corporate metrics, you should take this bias, along with others, into account when determining who should be advanced. Additionally, if your goal is near-full virtualization, it’s worth taking the time to identify who is most likely to oppose such virtualization and to understand their motives. There are people who want to build personal fiefdoms and who see virtualization as deleterious to those ends. That too can contribute to corporate culture, and in a very negative way.
Summary
There are signs that many parts of the world are now entering a period of overemployment, in which there will simply not be enough workers to fill the available jobs. The pandemic has forced the issue as well, accelerating trends that were already in place by at least half a decade, if not more. Because of this, company strategists who are relying upon the world going back to the way things were before are apt to be shocked when it doesn’t.
Planning with the assumption that work from anywhere will be the dominant pattern of employment is likely a safe bet. For most organizations it provides the benefit of being able to draw upon people’s talents without physically disrupting their lives, giving them an edge over more traditional organizations that can’t adapt to that change. With better productivity and collaboration tools, the mechanisms increasingly exist to make remote work preferable in many respects to onsite work, though it does require management to give up some perceived control and to overcome its discomfort with adaptation. Finally, the workers themselves may have the final say in this transition, voting with their feet if they feel there are better opportunities with more flexibility elsewhere.
Kurt Cagle is the Community Editor of Data Science Central, and the Producer of The Cagle Report, a weekly analysis of the world of technology, artificial intelligence, and the world of work.
The Non-Fungible Token (NFT) craze is an environmental blitz.
What’s behind the massive energy use.
Solutions to the problem.
The high environmental toll of NFTs
A couple of weeks ago, I wrote an article outlining the Non-Fungible Token (NFT) process. For a brief moment, I considered turning some of my art into NFTs. That moment soon passed after I realized the staggering environmental toll behind buying and selling NFTs. While fads come and go with little impact (anyone remember Cabbage Patch dolls?), NFTs are unique in that they not only cause environmental harm when they are first bought and sold, but they continue to consume vast amounts of energy every time a piece is resold. In other words, while purchasing used goods, like clothing or housewares, has a fraction of the environmental impact of buying new, the same is not true of NFTs. Every time an NFT transaction takes place, whether for a newly minted piece or a resale, massive amounts of energy are required to fuel it.
Why do NFT Transactions Consume So Much Energy?
The enormous environmental cost associated with NFTs is tied to the way the network they are built on is secured. Ethereum, the blockchain that holds the NFTs, uses a compute-intensive proof-of-work (PoW) protocol to prevent double spending, economic attacks, and other manipulations [1]. PoW was designed to be computationally expensive: basically, the more work involved in creating a block, the higher the security [2].
The validation of ownership and transactions via PoW is based on search puzzles over hash functions: cryptographic algorithms that map inputs of any size to a unique output of a fixed bit length. These challenging puzzles, which must be solved by the network, increase in complexity according to the price of cryptocurrency, how much computing power is available, and how many requests there are for new blocks [3]. As NFTs take off, demand surges and the entire system struggles to keep up, fueling demand for more and more warehouses, more cooling, and more electricity consumption.
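To make the mechanism concrete, here is a deliberately simplified TypeScript sketch of a hash puzzle, assuming Node’s built-in crypto module. It is a toy: Ethereum’s actual PoW algorithm (Ethash) is far more involved, but the core mechanic, burning compute until a hash meets a difficulty target, is the same.

```typescript
import { createHash } from "crypto";

// Toy proof-of-work: find a nonce such that SHA-256(data + nonce)
// begins with `difficulty` hex zeros. Every failed attempt is wasted
// energy; that waste is what makes the chain expensive to attack.
function mine(data: string, difficulty: number): { nonce: number; hash: string } {
  const target = "0".repeat(difficulty);
  let nonce = 0;
  for (;;) {
    const hash = createHash("sha256").update(data + nonce).digest("hex");
    if (hash.startsWith(target)) return { nonce, hash };
    nonce++;
  }
}

// Each extra zero of difficulty multiplies the expected work by 16.
console.log(mine("nft-transfer:token#42", 5));
```

Scale that loop across an entire network of competing miners, and the warehouse-sized electricity bills begin to make sense.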
Many organizations and individuals have attempted to ballpark the carbon footprint of NFTs, and most of their estimates paint the process in a poor light. A single NFT transaction has been estimated to have a carbon footprint equal to the energy required to:
Keep the lights on for 6 months (or more) in an art studio [4],
Produce 91 physical art prints [5],
Mail 14 art prints [6],
Process 101,088 VISA transactions [7],
Watch 7,602 hours of YouTube [7],
Drive 500 miles in a standard American gas-powered car [8].
Although it is challenging to ascertain the exact environmental cost of NFTs, much work has been reported on the more established Bitcoin, which runs on similar principles. For example, Elon Musk’s recent dabbling with PoW-based Bitcoin used so much energy in just a few days that it negated the carbon emissions reduced by every Tesla ever sold [9].
Digital artist Everest Pipkin, writing on the state of cryptoart in a blog post, states:
“This kind of gleeful wastefulness is, and I am not being hyperbolic, a crime against humanity” [10].
What is the Solution?
Steps have been taken toward greater energy efficiency. For example, Ethereum is attempting to move to a more energy-efficient consensus mechanism called proof-of-stake (PoS). However, this is faltering out of the starting gate. A post on the Ethereum website states, “…getting PoS right is a big technical challenge and not as straightforward as using PoW to reach consensus across the network.” [11] In other words, while we wait (potentially for years) for Ethereum to “get it right”, we’re busy polluting the atmosphere like it’s 1972.
Some digital artists have attempted to make their transactions carbon neutral by planting trees or creating sustainable farms, but their efforts have backfired. For example, artist John Gerrard recently created a “carbon-neutral” NFT video piece called Western Flag [12]. The carbon neutrality was the result of investment in “a cryptofund for climate and soil”. However, Gerrard’s piece generated more buzz for NFTs, fueling more creations by more artists, most of whom did not even attempt to offset their transactions by planting trees [9]. Not that planting trees to alleviate emissions guilt works anyway: critics have dismissed tree-planting offset schemes as nothing more than a fig leaf [13].
“Robert” by Zac Freeman.
The real solution? Pass this fad by. Instead, support artists who create sustainable art, like assemblage artist Zac Freeman. Freeman, a resident artist at CoRK arts district in Jacksonville, Florida, creates art in the real-world from found objects: throwaway items like used Lego bricks, paper clips and plastic bottle tops.
“If I can get my art in front of 10,000 people and get them to think about disposable goods and cultural consumerism,” says Freeman, “I’ve achieved my goal.”
For the environmental cost of a single NFT transaction, you can get Zac (or any other artist) to ship you 14 prints. Or, for the price of one animated flying cat with a pop-tart body [14], you can commission Zac to create assemblage pieces of your entire extended family. I know which option I would choose.
Both MEAN and MERN are full-stack frameworks built from JavaScript components. The difference is that MEAN uses Angular JS while MERN uses React JS, developed by Facebook. Both help developers build reactive, intuitive UIs. To understand which stack is the better one, we need to understand the underlying differences between them.
DIFFERENCES BETWEEN MEAN AND MERN
MEAN: Components include MongoDB, Angular JS, Express, and Node. MERN: Components include MongoDB, React JS, Express, and Node.
MEAN: Angular JS is a full JavaScript development framework. MERN: React JS is an open-source JavaScript library.
MEAN: Uses the TypeScript language. MERN: Uses JavaScript and JSX.
MEAN: Bidirectional data flow via two-way binding. MERN: Unidirectional data flow (see the sketch below).
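As a concrete illustration of that last difference, here is a minimal, hypothetical React component pair in TypeScript showing unidirectional flow: state lives in the parent, flows down as props, and changes travel back up through an explicit callback.

```tsx
import { useState } from "react";

// State lives in one place (Counter) and flows down as props.
function Counter() {
  const [count, setCount] = useState(0);
  return <CounterView count={count} onIncrement={() => setCount(count + 1)} />;
}

// The child never mutates the data it receives; it reports events upward.
function CounterView(props: { count: number; onIncrement: () => void }) {
  return <button onClick={props.onIncrement}>Clicked {props.count} times</button>;
}

export default Counter;
```

With Angular’s two-way binding, by contrast, the template and the model can update each other directly, which is convenient but can make data changes harder to trace in a large application.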
Both stacks offer first-class features and immense functionality. The slight upper hand that MERN enjoys is in the learning curve: MERN is easier to grasp because React JS has a gentler learning curve than Angular JS. Let us take a deeper dive into the benefits of MEAN and MERN stacks to understand the power of each of these stacks fully.
BENEFITS OF MEAN AND MERN
MEAN STACK
All types of applications can be developed easily.
Various plug-ins and widgets are compatible with this stack. For development on a stricter time frame, this comes in handy.
Functionality skyrockets due to the availability of plug-ins.
Developers enjoy community support since the framework is open source.
Real time testing is possible with the built-in tools.
A single language is used for the back end and the front end. This improves coordination and makes applications more responsive.
MERN STACK
Front end and back end are covered by a single coding language.
The entire process can be completed using only JavaScript and JSON (see the sketch after this list).
Seamless development through the MVC architecture.
Real time testing through built-in tools.
Backed by an open-source community, and the source code can be easily modified.
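As a minimal sketch of that single-language claim (the route and data are hypothetical, not from any particular project), here is an Express back end in TypeScript speaking JSON to whatever React front end sits on top of it:

```typescript
import express from "express";

const app = express();
app.use(express.json()); // parse JSON request bodies

// In a full MERN app these records would live in MongoDB.
const items: { id: number; name: string }[] = [{ id: 1, name: "example" }];

// Read
app.get("/api/items", (_req, res) => res.json(items));

// Create
app.post("/api/items", (req, res) => {
  const item = { id: items.length + 1, name: req.body.name };
  items.push(item);
  res.status(201).json(item);
});

app.listen(3000, () => console.log("API listening on port 3000"));
```

The same developer who writes the React components can write these routes, with JSON as the shared contract between the two halves.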
According to the HackerRank developer skills report, 30% of developers went with Angular JS while 26% chose React JS. The report also mentions that 30% of programmers want to learn React JS and that 35.9% of developers prefer developing with React JS; thus MERN stands slightly above MEAN when it comes to popularity.
As far as we know, then, in terms of ease of understanding and popularity, MERN is at the forefront right now. Let us make a detailed comparison to understand who will win the race in 2021.
MEAN vs MERN : A DETAILED COMPARISON
Scalability, Security: Both MEAN and MERN are equally secure. However, in terms of scalability, MERN is at the forefront.
MVC: For enterprise level apps, a complete architecture needs to be maintained. MEAN is the better option for this.
UI: For an advanced and simple UI, MERN is the go-to stack. MERN facilitates user interaction.
CRUD: For CRUD (create, read, update, delete), MERN is the ideal stack. The React JS handles data changes quickly and has a good user interface as well.
Support: The Angular JS in MEAN supports HTTP calls and unifies the back end, but enterprise-level app development will require third-party libraries. React JS, on the other hand, improves functionality through its supplementary libraries. MEAN scores slightly higher in this section.
MEAN enhances the experience through the use of third party extensions while MERN would require additional configurations to do this.
In the aspects of learning curve, UI, scalability, and CRUD, the MERN stack scores higher than the MEAN stack. However, in the aspects of community support and MVC, MEAN stays ahead. In terms of security, both are at par. Ultimately, though, the choice of stack depends entirely on the business needs.
MEAN is more affordable and is the first choice for startups and SMEs. Switching between clients and servers is easier. For real-time web apps, MEAN is definitely the best choice. In MERN, the virtual DOM enhances the user experience and gets the developer’s work done faster. React JS maintains stable code thanks to its unidirectional data flow. For coding Android and iOS apps in JavaScript (via React Native), MERN is definitely the way to go.
TAKEAWAY
Companies like Accenture, Raindrop, Vungle, Fiverr, UNIQLO, and Sisense, among others, use MEAN in their tech stacks. Brands such as UberEats, Instagram, and Walmart use the MERN stack. Both stacks provide an incredible user experience, and both can deliver stability and scalability.
From this we can conclude that enterprise-level projects favor MEAN over MERN, while MERN makes rendering UIs simpler. Both are reliable choices for quick front-end development.
MEAN is good for large-scale applications. MERN is good for the faster development of smaller applications.
At Orion we have an excellent team that can help you with all your MEAN and MERN stack development needs.
I’m starting to see the big consultancies and advisory services coming out with their lists of “what’s hot” from a data and analytics perspective. While I may not have the wide purview of these organizations, I certainly do work with some interesting organizations who are at various points in their data and analytics journey.
With that in mind, I’d like to share my perspective as to what I think will be big in the area of data and analytics over the next 18 months.
Contextual Knowledge Center. A contextual directory, on AI steroids, that facilitates the identification, location, reuse, and refinement (including version control) of the organization’s data and analytic assets (such as workflows, data pipelines, data transformation and enrichment algorithms, critical data elements, composite metrics, propensity scores, entity or asset models, ML models, standard operating procedures, governance policies, reports, dashboard widgets, and design templates). It builds upon an organization’s data catalog by integrating contextual search, Natural Language Processing (NLP), asset scoring, graph analytics, and a decisioning (recommendation) engine to recommend data and analytic assets based upon the context of the user’s request.
Autonomous Assets. These are composable, reusable, continuously learning and adapting data and analytic assets (think intelligent data pipelines and ML models) that appreciate, rather than depreciate, in value the more they are used. These autonomous assets produce pre-defined business and operational outcomes and are constantly updated and refined based upon changes in the data and in outcome effectiveness, with minimal human intervention. This could apply to almost any digital asset, including data pipelines, data transformation and enrichment algorithms, process workflows, AI/ML analytic models (like Tesla’s Full Self-Driving, or FSD, module), and standard operating procedures and policies. Yeah, this is probably one of my top 3 topics.
Entity Behavioral Models. These Analytic Profiles capture, codify, share, re-use, and continuously refine the predicted propensities, patterns, trends, and relationships for the organization’s key human and device (“things”) assets, at the level of the individual asset. This is the heart of nanoeconomics, the economics of individual human or device predicted propensities. It is Entity Behavioral Models, or Analytic Profiles, that drive the optimization of the organization’s key business and operational use cases.
AIOps / MLOps. This is an emerging IT field where organizations are utilizing big data and ML to continuously enhance IT operations (such as operational task automation, performance monitoring, load balancing, asset utilization optimization, predictive maintenance, and event detection, correlation, and resolution) with proactive and dynamic insights.
DataOps. An automated, process-oriented methodology that improves model quality and effectiveness while reducing cycle time in training, testing, and operationalizing data analytics. DataOps is an integrated suite of data management capabilities, including best practices, automated workflows, data pipelines, data transformations and enrichments, and architectural design patterns.
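As a small, hypothetical illustration of the composability behind those automated workflows (the record shapes and stage names are invented for the example), each pipeline stage below is a typed transformation, so stages can be tested in isolation and recombined, which is where much of the cycle-time reduction comes from:

```typescript
// A pipeline stage is just a typed transformation from A to B.
type Stage<A, B> = (input: A) => B;

// Compose two stages into one; real DataOps tooling adds scheduling,
// monitoring, and lineage on top of this same idea.
const pipe =
  <A, B, C>(f: Stage<A, B>, g: Stage<B, C>): Stage<A, C> =>
  (input) => g(f(input));

interface RawRecord { amount: string; region: string }
interface CleanRecord { amount: number; region: string }

const clean: Stage<RawRecord[], CleanRecord[]> = (rows) =>
  rows.map((r) => ({ amount: parseFloat(r.amount), region: r.region.trim() }));

const enrich: Stage<CleanRecord[], (CleanRecord & { tier: string })[]> = (rows) =>
  rows.map((r) => ({ ...r, tier: r.amount > 1000 ? "high" : "standard" }));

const pipeline = pipe(clean, enrich);
console.log(pipeline([{ amount: "1250.00", region: " EMEA " }]));
```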
Data Apps / Data Products. Data apps or data products are a category of domain-centric, AI-powered apps designed to help non-technical users manage data-intensive operations to achieve specific business and operational outcomes. Data apps use AI to mine a diverse set of customer, product, and operational data, identify patterns, trends, and relationships buried in the data, make timely predictions and recommendations with respect to next best action, and track the effectiveness of those recommendations to continuously refine AI model effectiveness.
Software 2.0. An emerging category of software that learns through advanced deep learning and neural networks versus being specifically programmed. Instead of programming the specific steps that you want the software program to execute to produce a desired output, Software 2.0 uses neural networks to analyze and learn how to produce that final output without defining the processing steps and with minimal human intervention. For example, Software 2.0 using neural networks can learn to differentiate a dog from a cat versus trying to program the characteristics and differences between a dog and a cat (good luck doing that!).
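To make that contrast concrete, here is a toy sketch using TensorFlow.js; the features and labels are invented placeholders rather than a real cat/dog dataset. Instead of coding the rules that separate the two classes, we declare a network shape and let training discover the rules:

```typescript
import * as tf from "@tensorflow/tfjs";

async function train() {
  // Declare the shape of the program; training fills in its behavior.
  const model = tf.sequential();
  model.add(tf.layers.dense({ units: 8, inputShape: [4], activation: "relu" }));
  model.add(tf.layers.dense({ units: 1, activation: "sigmoid" })); // 0 = cat, 1 = dog

  model.compile({ optimizer: "adam", loss: "binaryCrossentropy" });

  // Made-up feature vectors (e.g. ear shape, snout length) and labels.
  const xs = tf.tensor2d([
    [0.2, 0.9, 0.1, 0.3],
    [0.8, 0.2, 0.7, 0.9],
  ]);
  const ys = tf.tensor2d([[0], [1]]);

  // This is where the "programming" happens in Software 2.0.
  await model.fit(xs, ys, { epochs: 50 });
}

train();
```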
AI Innovation Office. The AI Innovation Office is responsible for the testing and validation of new ML frameworks, career development of the organization’s data engineering and data science personnel, and “engineering” of ML models into composable, reusable, continuously refining digital assets that can be re-used to accelerate time-to-value and de-risk use case implementation. The AI Innovation Office supports a “Hub and Spoke” data science organizational structure where the centralized “hub” data scientists collaborate with the business unit “spoke” data scientists to engineer (think co-create) the data and analytic assets. The AI Innovation Office supports a data scientist rotation program where data scientists cycle between the hub and the spoke to provide new learning and development opportunities.
Data Literacy, Critical Thinking, and AI Ethics. AI will impact every part of your organization not to mention nearly every part of society. Consequently, there is a need to train everyone on data literacy (understanding the realm of what’s possible with data), critical thinking (to overcome the natural human decision-making biases), and ethical AI (to ensure that the AI models are treating everyone equally and without gender, race, religious, or age biases). Everyone must be prepared to think critically about the application of AI across a range of business, environmental, and societal issues, and the potential ethical ramifications of AI model false positives and false negatives. Organizations must apply a humanistic lens from which to ensure that AI will be developed and used to the highest ethical standards.
Well, that’s it for 2021. And if we can avoid another pandemic or some other major catastrophe, I’m sure that next year will be even more amazing!