New Study Warns: Get ready for the next pandemic

  • A new study warns we’re likely to see another major pandemic within the next few decades.
  • New database of pandemic info used to calculate increased probability.
  • Extrapolation from the data suggests a pandemic intense enough to wipe out human life is statistically likely within the next 12,000 years.

For much of the past century, the fear has been that a calamity like an asteroid strike, supernova blast, or environmental change would wipe out humanity. But new research from the Duke Global Health Institute points to a much different demise for humans. The study, titled Intensity and frequency of extreme novel epidemics and published in the Proceedings of the National Academy of Sciences, used almost four hundred years’ worth of newly assembled data to make some dire predictions [1].

The authors used the new database along with estimated rates of zoonotic diseases (those transmitted from animals to humans) emerging because of human-caused environmental change. “These effects of anthropogenic environmental change,” they warn, “may carry a high price.”

A New Database Paints a Picture

One reason for the distinct lack of research into the probability of another pandemic has been a shortage of accessible data, short observational records, and analysis methods that assume stationarity.

The conventional theory of extreme events, such as major pandemics, assumes that the process generating the events is stationary, meaning that shifts in time do not change the shape of the distribution. But the authors found that pandemic data are nonstationary. While long-term observations and analysis tools for nonstationary processes were available in other disciplines, global epidemiological information on the topic was “fragmented and virtually unexplored”.

The team addressed the problem, in part, by creating a new database containing information from four centuries of disease outbreaks, which they have made publicly available in the Zenodo repository along with the MATLAB code used to analyze it [3].

The database, which covers 182 historical epidemics, led the authors to conclude that while the rate of epidemics varies wildly over time, the tail of the probability distribution for epidemic intensity (defined as the number of deaths divided by global population and epidemic duration) decays slowly. The implication is that the probability of observing an epidemic of a given intensity decreases only slowly as that intensity grows; far from making another extreme epidemic unlikely, this slow decay means just the opposite.
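To make the intensity definition concrete, here is a minimal Python sketch of the calculation described above. The numbers are made-up placeholders for illustration, not figures from the study; the paper’s own analysis code (in MATLAB) is in the Zenodo repository [3].

```python
# Illustrative sketch of epidemic "intensity" as defined above:
# deaths divided by (global population x epidemic duration).
# The inputs below are hypothetical placeholders, not data from the study.

def epidemic_intensity(deaths, global_population, duration_years):
    """Deaths per person per year, averaged over the epidemic's duration."""
    return deaths / (global_population * duration_years)

# Hypothetical example: 1 million deaths, 1 billion people, a 2-year epidemic
print(epidemic_intensity(1_000_000, 1_000_000_000, 2))  # 0.0005
```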

When the authors combined the model with increasing rates of disease emergence from animal reservoirs linked to environmental change, they found that the probability of observing another serious pandemic (currently a lifetime risk of around 38%) will likely double in the next few decades.

A New Pandemic is Around the Corner

Novel pathogens like the virus behind Covid-19 have been emerging in human populations at an increasing rate over the last half-century. The new study estimates that the probability of a novel disease outbreak will grow from its current level of about 2% a year to around three times that. The researchers used that risk to estimate that another major pandemic will very likely happen within 60 years, much sooner than previously anticipated, making it very likely you will see another major pandemic in your lifetime.
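As a rough, back-of-the-envelope illustration of why a small annual risk adds up over a lifetime, the sketch below converts a yearly outbreak probability into the chance of seeing at least one outbreak over 60 years. This is standard probability arithmetic under an assumed constant, independent annual risk, not the authors’ statistical model.

```python
# Illustrative only: the 2% annual risk is the figure quoted in the article;
# the arithmetic below assumes that risk is constant and independent each year.

annual_risk = 0.02                                   # ~2% chance per year
years = 60

p_at_least_one = 1 - (1 - annual_risk) ** years
print(f"P(>=1 outbreak in {years} years at 2%/yr): {p_at_least_one:.2f}")   # ~0.70

# If the yearly risk roughly tripled, as the study projects:
p_tripled = 1 - (1 - 3 * annual_risk) ** years
print(f"P(>=1 outbreak in {years} years at 6%/yr): {p_tripled:.2f}")        # ~0.98
```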

That’s not to say you’ll have to wait until you’re 80 years old to see another nefarious virus sweep across the globe. The event is equally probable in any one year during that time frame, said Duke University professor Gabriel Katul, Ph.D., one of the paper’s authors. “When a 100-year flood occurs today, one may erroneously presume that one can afford to wait another 100 years before experiencing another such event,” says Katul. “This impression is false. One can get another 100-year flood the next year.”

In addition to warning about the perils of ignoring human-induced environmental changes, the authors extrapolated the data to make another dire prediction. In the press release [2], they state that it is statistically likely that within the next 12,000 years a pandemic will wipe out the human race, which means it is extremely unlikely mankind will still be around when the next extinction-level asteroid hits Earth.

References


[1] Intensity and frequency of extreme novel epidemics

[2] Statistics say large pandemics are more likely than we thought

[3] A Global Epidemics Dataset (1500-2020)


Understanding Self Supervised Learning

In the last blog, we discussed the opportunities and risks of foundation models. Foundation models are trained on broad data at scale and are adaptable to a wide range of downstream tasks. In this blog, we extend that discussion to self-supervised learning, one of the technologies underpinning foundation models.

NLP has taken off thanks to Transformer-based pre-trained language models (T-PTLMs). Models like GPT and BERT are built on transformers, self-supervised learning, and transfer learning. In essence, these models build universal language representations from large volumes of text using self-supervised learning and then transfer this knowledge to downstream tasks. This means you do not need to train the downstream (subsequent) models from scratch.

In supervised learning, training a model from scratch requires many labelled instances, which are expensive to generate. Various strategies have been used to overcome this problem. Transfer learning lets us learn in one context and apply that knowledge to a related context, reusing what was learned on the source task to perform well on the target task; for this to work, the target task should be similar to the source task. The idea of transfer learning originated in computer vision, where large pre-trained CNN models are adapted to downstream tasks by adding a few task-specific layers on top of the pre-trained model, which are then fine-tuned on the target dataset.
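As a concrete illustration of that fine-tuning pattern, here is a minimal PyTorch sketch: freeze a pre-trained CNN backbone and train only a new task-specific head. The dataset, number of classes, and training details are assumptions made for the example; it is not code from the survey.

```python
# A minimal transfer-learning sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn
from torchvision import models

num_target_classes = 5                        # hypothetical downstream task

model = models.resnet18(pretrained=True)      # CNN pre-trained on a large source dataset
for param in model.parameters():              # freeze the pre-trained backbone
    param.requires_grad = False

# Add a task-specific layer on top and fine-tune only that layer
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a fake batch of 8 images (3 x 224 x 224)
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_target_classes, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```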

Another problem is that deep learning models like CNNs and RNNs cannot easily model long-range context. Transformers, which contain a stack of encoders and decoders and can learn complex sequences, were proposed to overcome this problem.

The idea of Transformer-based pre-trained language models (T-PTLMs) evolved in the NLP research community by combining transformers and self-supervised learning (SSL). Self-supervised learning allows the transformer to learn from the pseudo supervision provided by one or more pre-training tasks. GPT and BERT were the first T-PTLMs developed with this approach. SSL does not need large amounts of human-labelled data because the supervision comes from the pre-training data itself.

Thus, self-supervised learning (SSL) is a learning paradigm in which the model learns from the pseudo supervision provided by pre-training tasks. SSL finds applications in areas like robotics, speech, and computer vision.

SSL is similar to both unsupervised learning and supervised learning, yet different from both. Like unsupervised learning, it does not require human-labelled instances. However, like supervised learning, SSL still relies on supervision; it is simply generated automatically by the pre-training tasks.
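To make the idea of “pseudo supervision” concrete, here is a minimal sketch of a masked-language-modeling style pre-training task, in the spirit of BERT: the labels are manufactured from the unlabeled text itself. It is a toy illustration; real T-PTLMs work on subword tokens at massive scale.

```python
# Toy illustration of self-supervised pseudo-labels via random masking.
import random

def make_mlm_example(tokens, mask_prob=0.15, mask_token="[MASK]"):
    """Randomly mask tokens; the masked-out words become the training targets."""
    inputs, targets = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            inputs.append(mask_token)
            targets.append(tok)        # pseudo-label supplied by the data itself
        else:
            inputs.append(tok)
            targets.append(None)       # no loss computed at this position
    return inputs, targets

sentence = "self supervised learning creates its own labels from raw text".split()
print(make_mlm_example(sentence))
```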

In the next blog, we will continue this discussion by exploring a survey of transformer-based models.

Source: Adapted from

AMMUS : A Survey of Transformer-based Pretrained Models in Natural …

Katikapalli Subramanyam Kalyan, Ajit Rajasekharan, and Sivanesan Sa…



7 Ways to Show Kindness with Your Remote Marketing Team

I believe that remote working brings with it improved diversity and a better understanding of other countries and cultures.

But you’ll often hear detractors of geographically dispersed marketing teams talking about how you lose something when staff work remotely. Without the proverbial water cooler to chat by, or after-work drinks in the local dive bar, there’s no way a team can connect. Of course, that’s not true – there are plenty of ways to help your remote crew come together, and I should know. I’ve been working remotely for more than 5 years now and sharing my tips on how to work from home.

It is true, though, that it takes extra thought to keep a dispersed team feeling connected and appreciated, and kindness can be a big part of that. So, here are some tips about encouraging kindness in your dispersed marketing team.

Celebration

In an office environment, birthdays are often a big deal. Traditions vary from place to place, but bringing in cakes for colleagues or going out for drinks are common happenings. Then there are weddings, new babies and so on, where a card goes around the building in an envelope, collecting signatures and small donations that are used to purchase a group gift.

How do you make that happen remotely? Like most things, it’s possible to do all that from a distance. You need to use the right tools, and to make a little bit more effort.

Something Sweet

There are online bakeries that will send cake anywhere in the world. So, technically it is possible to send out a cupcake or cookie to celebrate an event. But that would also mean sharing home addresses, and generally be a lot more bother and expense than grabbing a box from Krispy Kreme on the way in.

As an alternative, how about asking everyone to have a tasty treat with them at the next daily stand-up? Dedicate the first or last few minutes of the meeting to toasting the birthday girl or congratulating the new dad. Looking at the different baked treats that people bring can be an icebreaker and is a great way to start conversations about different cultures.

Gifts for your remote marketing workers

PayPal, Venmo, Google Wallet…all these and more are ways that you can send money to someone regardless of where they are in the world. When that’s done, where you buy the gift from is your choice. If you plan far ahead enough, most suppliers can get your delivery there on time. If you leave it to the last minute, then it’s probably best left to Amazon to fulfill.

Yes to chitchat

Having a channel that is specifically dedicated to chatting is key. If you haven’t already implemented this advice, then World Kindness Day seems like a good time to start.

Encourage your staff to use it, to share what’s going on in their lives, big or small. To wish each other good morning, or goodnight, and check in on how they’re doing. Share jokes. Share memes. It all helps to create a positive working environment.

Positive Feedback

“Thank you” is a powerful phrase. Appreciating what others have done should be part of the daily stand-up. But sometimes, kindnesses are small and don’t need to be publicly recognised. For times like that, it’s great to have a way your team can express themselves.

There are a few tools that can help with that – something like HeyTaco!, a Slack chatbot where users can send each other virtual tacos as a quick and fun thank-you gesture for helpful advice. Another idea is a Virtual Kudos Box, or a team awards system with nominations from within the team.

Include everyone in the meeting

If you’ve got new staff, give them a thorough onboarding process. Welcoming the new guy is a surefire way to help them integrate into the team, and as well as being kind, that helps boost your productivity. And when you’re chairing a meeting, keep track of who is talking and nudge the reluctant ones to join in. Yes, some of us are more introverted, but we all feel good when we’re asked for our opinion.

No to Gossip

The polar opposite of kindness is when people start talking behind others’ backs. It doesn’t matter what they’re saying; it’s the divisiveness that’s a problem. Make it clear that you aren’t going to tolerate a culture of moaning. One rule that’s often talked about is, ‘Don’t come to me with a problem unless you have a solution.’

One quote often attributed to Buddha (but actually the work of Victorian poet, Mary Ann Pietzker) is, ‘Before you speak, ask: Is it necessary? Is it kind? Is it true? Does it add to the silence?’ Although the source may be fake news, the sentiment is worth reminding people of, every now and then.

Don’t forget about the Cultural Differences

When your staff work in different countries or come from different cultural backgrounds, there can be bumps in the road to mutual understanding. Literally, for colleagues who don’t share the same first language. But little considerations can be put in place, to smooth the way to understanding.

Firstly, agreeing as a team that you’ll try to avoid using slang and colloquialisms will help avoid a lot of confusion. For technical terms, your team could curate a glossary that can be kept to hand during meetings, saving time on questions. Sending out as much material ahead of the meeting as possible is good, too. It helps those who have a different first language to follow on if they know roughly what subjects are going to come up.

Be Kind

You’ll probably have heard that remote teams are more productive. That’s (mostly) because staff are happier and healthier when they work from home. And do you know what else makes people happy and healthy? You got it! Kindness.

A research study by Harvard Business School & The University of British Columbia gave participants a small sum of money and told them to spend it either on themselves or someone else. Those that spent it on someone else reported that they were happier than those who’d indulged themselves. So it isn’t just the recipient of kindness who gets the warm & fuzzies, it’s the giver too.

In the meantime, in the words of two of the greatest influencers of our time, ‘Be excellent to each other, and party on, dudes.’


Text Annotations in the News Industry

In the media and communication industry, writers are frequently confronted with huge volumes of textual material. They often struggle to extract structured knowledge from these documents, so the text is underutilized and critical information may go unnoticed.

Machine learning techniques can assist, but they require a thorough understanding of the information required and manual annotation of the corpus. Before going further, let’s look at what annotation is, its main types, and how it helps machine learning models perform accurately.

What are annotations?

Annotation is the process of labeling data, whether images, video, text, or other objects, so that it can be used to train a machine learning model. In simple words, it is the process of transcribing, identifying, and labeling the key characteristics in your data. These are the characteristics you want your machine learning system to recognize on its own when it is later shown unannotated real-world data.

Annotation can assist in the cleaning up of a dataset. It has the ability to fill in any gaps that may exist. Annotation of data can be used to recover data that has been incorrectly labeled or has missing labels and replace it with new data for the Machine Learning model to utilize.

Types of Annotations

1. Text Annotation

2. Video Annotation

3. Image annotation

4. Named Entity Annotation

5. Audio Annotation

6. Semantic Annotation

7. Intent Annotation

8. Sentiment Annotation

Annotation of text in the media industry

The process of gathering, editing, and publishing newspaper stories is a complex and highly specialized task that frequently operates within specific publishing constraints. News isn’t necessarily written in a neutral tone; it might depart from the usual by employing certain vocabulary, a particular writing style, or a particular author’s point of view. Media bias, and news bias in the context of news stories, are terms used to describe these qualities. To counter news bias, accuracy and balanced viewpoints are emphasized in news reporting, because news can have a large influence on readers, forming people’s viewpoints and attitudes toward social issues, and ultimately changing political views and society.

With such a huge amount of text data in the industry, annotating each sentence is a time-consuming and laborious task, which raises the need for professional annotators who can annotate the text correctly.

How it is done

Data selection

First, the raw data set is collected from the internet. It is impossible to label every sentence in those articles. Instead, annotation companies use several methods to choose a subset of articles for each categorization challenge and then label or annotate only those subsets.

Data Processing

Data processing is the stage where the collected data is turned into useful information. It should be corrected so that the end product, or data output, is not harmed: missing values must be addressed, special characters removed, irrelevant phrases eliminated, and so on. A thorough and succinct exploratory data analysis (EDA) can reveal the issues that need to be addressed and guide the data preparation and cleaning process. In this case, most HTML elements were removed but no further text processing was done, such as lower-casing, removing stop words, lemmatization, or tokenization, because the sentences would have become hard to read and comprehend.
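As a small illustration of that light-touch cleaning, the sketch below strips HTML tags and collapses whitespace while deliberately keeping case, stop words, and word forms so the sentences stay readable for annotators. It is a generic example, not any particular company’s pipeline.

```python
# Minimal cleaning sketch: remove HTML elements, keep the sentence readable.
import re

def light_clean(raw_html: str) -> str:
    text = re.sub(r"<[^>]+>", " ", raw_html)   # drop HTML tags
    text = re.sub(r"\s+", " ", text)           # collapse runs of whitespace
    return text.strip()

print(light_clean("<p>The <b>minister</b> announced   a new policy.</p>"))
# -> "The minister announced a new policy."
```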

Data Labeling

The data that the models are trained on must be labeled as correctly as possible so the ML models can achieve the best possible prediction accuracy. As a result, it’s critical that those who label the data understand the categorization categories and how to assign the relevant category to a sentence, i.e., how to accurately label the phrase.


About Deep Learning as a Subset of Machine Learning and AI

Deep learning has wide application in artificial intelligence and computer vision-backed programs. Across the world, machine learning has added value to a range of tasks using key AI methodologies such as natural language processing, artificial neural networks, and mathematical logic. Of late, deep learning has become central to machine learning algorithms that must perform highly complex computation and handle gigantic amounts of data.

With a multi-layer neural architecture, deep learning has been solving multiple scenarios and presenting solutions that work. There are several deep learning methods which are actively applied in machine learning and AI.

Types of Deep learning methods for AI programs

1. Convolutional Neural Networks (CNNs): CNNs, also known as ConvNets, are multilayer neural networks that are primarily used for image processing and object detection.

2. Long Short Term Memory Networks (LSTMs): Long-term dependencies may be learned and remembered using LSTMs, which are a kind of Recurrent Neural Network (RNN). Speech recognition, music creation, and pharmaceutical development are all common uses for LSTMs.

3. Recurrent Neural Networks (RNNs): Image captioning, time-series analysis, natural-language processing, handwriting identification, and machine translation are all typical uses for RNNs.

4. Generative Adversarial Networks (GANs): GANs are deep learning generative algorithms that generate new data instances that are similar to the training data. GANs aid in the creation of realistic pictures and cartoon characters, as well as the creation of photos of human faces and the rendering of 3D objects.

5. Radial Basis Function Networks (RBFNs): They are used for classification, regression, and time-series prediction and have an input layer, a hidden layer, and an output layer.

6. Multilayer Perceptrons (MLPs): MLPs are a type of feedforward neural network that consists of many layers of perceptrons with activation functions.

7. Self Organizing Maps (SOMs): SOMs enable data visualization by using self-organizing artificial neural networks to reduce the dimensionality of data, helping users comprehend high-dimensional data.

8. Deep Belief Networks (DBNs): DBNs are generative models with several layers of stochastic, latent variables. For image identification, video recognition, and motion capture data, Deep Belief Networks (DBNs) are employed.

9. Restricted Boltzmann Machines (RBMs): RBMs are stochastic neural networks that can learn from a probability distribution across a collection of inputs.

10. Autoencoders: It’s a sort of feedforward neural network where the input and output are both the same. Autoencoders are utilized in a variety of applications, including drug discovery, popularity prediction, and image processing.
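To make the autoencoder idea concrete, here is a minimal PyTorch sketch in which the network is trained to reconstruct its own input. Layer sizes and the fake data are assumptions chosen purely for illustration.

```python
# Minimal autoencoder sketch: the input is also the training target.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features=20, n_hidden=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))    # compress, then reconstruct

model = AutoEncoder()
x = torch.randn(64, 20)                         # fake batch of 64 examples
loss = nn.MSELoss()(model(x), x)                # reconstruction error vs. the input itself
loss.backward()
print(float(loss))
```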

Why does deep learning matter in AI implementation?

Deep learning models have larger and more specific hardware requirements. DL helps artificial intelligence (AI) systems achieve strong outcomes in prediction and classification tasks. Deep learning, a subtype of machine learning, employs artificial neural networks to carry out the machine learning computation. It enables machines to tackle complicated problems even when they are given a large, unstructured, and interconnected data set.

On the other hand, it’s no secret that AI programs require massive amounts of machine learning to predict accurately. The predictions are accurate only if the data set used to train the ML model is well structured and labelled. Hence, models and results in classical ML are more dependent on curated, labelled data than those in deep learning.

Training data need in AI implementations

Training data is in focus whenever we talk about AI programs and implementation. Every artificial intelligence requires supervised or unsupervised learning to understand a given problem; without training data, it is unlikely that an AI program will produce any logical results. As a field, AI makes use of unstructured, structured, or hybrid training data in a variety of formats. Deep learning differs somewhat in its training data requirements, since its computation is organized in layers. Machine learning, being data-dependent, requires large amounts of training data, including both labelled text and image data, to come up with a model.

Summing up, deep learning and machine learning both depend on a certain amount of data, but deep learning can also work without supervised data labeling and is more computation-intensive.


How Data Science and BI Is Revolutionizing the Sports Industry with Power BI

Nowadays, data has become very important in all industries, which is why thousands of companies from multiple sectors are turning to data analytics tools. Using BI and data analysis tools can prove helpful for businesses in any sector. Even the sports sector can benefit a lot from the implementation of proper BI solutions and technologies. Businesses in this sector can gain from hiring the right Power BI consulting services.

Why is using BI tools in the sports sector necessary?

These days, sports are not just about the physical game; they are more like a numbers game. Whether it is basketball, baseball, football, or soccer, the sport involves big players and a huge amount of investment. In the last few years, sports sector entities have started deploying specialized Big Data analytics tools, and the deployment of artificial intelligence and machine learning technologies is altering the sports industry in unprecedented ways.

The sports sector companies are making great use of BI tools and data analysis systems to interpret statistical data. Analysis of such huge amounts of data and predictive analysis features in such tools can be beneficial for the athletes, coaches/trainers, and teams in the long run. 

Major sports organizations now use connected apps and cloud-based software that are fed data obtained from wearable devices, on-field cameras, and tracking gadgets. These devices are fitted to the accessories used by the players and installed around the field. Real-time game data is thereby collected and stored for analysis using specialized BI applications.

In what ways does the sports sector gain by using BI solutions?

  • Finding the root cause of performance dip– By using BI tools and apps, players can stay updated on their performance track record. A baseball or basketball league player, for example, can find out in which games he or she performed below the usual level or scored abysmally low. Then, it becomes easier to track down the factors that might have led to reduced output or performance. Once the reasons for performance deficit are found, these can be handled. 

  • Making more accurate game predictions – The data analysis tools and BI software applications are laden with advanced predictive analysis technologies. By analyzing a large volume of data collected over a span of time, Power BI experts can make predictions on the outcome of upcoming matches and games, expected player performance, etc. To make such near-accurate predictions, the BI experts use diverse types of qualitative and quantitative data.

  • Helping the athletes to evade the risk of injuries – Big data can also be useful for helping athletes avoid in-field injuries. This is especially useful for players involved in high-intensity sports like lacrosse, football, and hockey. Using miniaturized devices like sensors and cameras, data is obtained on sports equipment that is unsafe or can lead to on-field injuries. Analyzed data can also be used to identify specific playing styles that lead to serious injuries. This helps the players avoid injuries, eventually.

  • Assessing the suitability of players and coaches/trainers – The sports teams and clubs always want to hire the best performing players, and they also look for new players with untapped talent and potential. When they use cutting-edge BI and data analytics tools, it becomes easier to assess the suitability of various types of players. These tools can use algorithms and qualitative and quantitative data to figure out the prospective players per season. They can also hire suitable trainers and coaches by using these tools. 

It is not only about tracking the professional achievements of a coach or player. These tools are also used to gather data and analyze aspects like the history of objectionable behavior, criminal record, physical conditions not suited for sports, etc. 

  • Aiding the athletes to prepare/practice better- Sometimes, the sportsmen and athletes fail to perform as expected owing to a deficit in practice and preparation methods. The underlying factor leading to improper or inadequate practice can be hard to fathom at times. 

Sometimes, the sportsmen and athletes may use supplements and diets that are not ideal, leading to reduced stamina during the games. In some cases, it can be owing to the usage of unsuitable accessories like shoes, attire, or protective gear. Athletes may also fail to perform optimally if they do not adhere to the apt fitness regime.

Data analytics tools are useful for finding such inherent flaws in practice, and these can be modified accordingly.

  • Enhancing player safety in a pandemic situation – As the Covid-19 pandemic continues to rage on and the virus mutates into newer strains, sports sector entities are resorting to BI tools to ensure player safety. Many universities and schools are using such solutions to ensure players’ safety is not compromised. The sports authorities are using data analytics services to figure out details like each player’s vaccination status, health record since 2020, proximity to containment zones, etc.

  • Developing a suitable strategy for specific games or matches– Sports teams and the coaches can gain from using advanced data analytics tools for devising the apt playing strategy for specific matches. For vital matches, they can use these tools to analyze quantitative data and learn in detail about the potential weaknesses of rival team players. So, this can enhance their winning prospects. 

Using data analytics tools can be beneficial for literally all types of athletes and sports sector entities, including professional league clubs, school- and university-level sports organizations, and organizations offering training in various sports. They can definitely gain by seeking the assistance of veteran Microsoft Power BI developers.

Summing it up

Sports sector entities of varying types can gain in various ways by deploying suitable BI and data analytics solutions. By using such tools, they get insights on players, trainers, and the risk factors and make more accurate predictions. However, to ensure they can make the best use of such applications, they have to hire suitable BI and data analysis professionals. While there are several such solutions, hiring a veteran Microsoft Power BI service agency is advisable.


Improving Shipment and Orders: Defense Visual Streams Logistics

Defense logistics (DL) is a significant and mostly untapped knowledge area in the field of Production Engineering. As a result, this article aims to define the domain of the DL issue. The emphasis is on the industrial, technical, institutional, organizational and, in particular, strategic management elements of logistics in the military industry as they are implemented in practice. The paper also suggests an organizational framework that identifies the goals of DL, their functional domains, and their interactions with the environment. Within this framework, the Defense Logistics Base (DLB) is defined as a system intended to develop and sustain military capability, but one that is also involved in the development of industrial capability, particularly in high and medium-high technologies applied to high-value products with dual applications. A research agenda for future work on strategic management linked to DL is also included.

The Defense Logistics Agency, which provides logistical assistance to the military, receives about 5,000 Freedom of Information Act requests every year, or roughly 100 requests each week.

In military logistics, reaction times, demand unpredictability, a wide range of material references, and cost-effectiveness all play a role in determining overall fighting capabilities. Capacity and efficiency of delivery are necessary for its procedures since it is considered to be the link between deployed troops and the industrial base, which supplies the goods and services that the forces need to complete their mission successfully. Supply Chain Management (SCM) must be improved to reduce delivery lead times to meet the necessary level of flexibility.

This cost, on the other hand, adds to the strain on the economy; even so, it cannot be avoided. Furthermore, maintaining and repairing the equipment creates an additional economic burden, and contrary to popular belief, the cost of maintaining and repairing equipment may often be greater than the original cost of purchasing it. As a result, military logistics departments are always searching for ways to reduce these expenses. Performance-based logistics has played a critical role in this respect, and it continues to do so.

Defense logistics solutions – A conceptual framework for thinking about them

Defense Logistics Solutions’ experience in the field of defense logistics is unrivaled in the industry as a top logistics service provider. Project and defence freight are two areas in which they have decades of expertise. Their skills include the transportation of defence equipment, the distribution of relief supplies, the delivery of oil tankers to distant areas, and the acceptance of difficult missions that only a few service providers will take on. Let’s take a look at why you should improve shipment and orders.

1. Management of the Supply Chain

Business leaders are looking for ways to decrease production lead times and improve collaboration with suppliers as global competition grows fiercer. Efficient, multimodal supply chain management services, from the vendor to the consignor, will assist you in keeping your production, buying, and distribution operations in sync.

2. Improved order and shipment management

Supply chain management services solutions, which make use of the physical infrastructure, personnel knowledge, and a small group of core carriers, enable customers to enhance order and shipment management, as well as boost tracking, storage, assets, and labor efficiency, among other things.

You will have the opportunity to map the visual stream of the present and future condition of the supply chain at these Logistics Solutions.

3. Transportation Planning, Manufacturing Planning, and Other Services

Its service extends beyond supply chain management to include a variety of other offerings. Transportation planning, inventory management, production planning, and resource reallocation are all aspects of the process.

4. The one, dependable source for all of your supply chain requirements

The transportation solution is the most dependable on the market since it uses cutting-edge technology, has a complete supply chain management portfolio, and has a global network of distribution centers.

Defense logistics solutions bring together the procedures, technology, and experience gained over a decade of providing logistical assistance to multinational businesses, establishing it as the global supply chain company to turn to when you need a single, all-in-one supplier for your supply chain requirements.


Another Gift of AI to the Future, Robotic Bees for Pollination: A Boon or Bane?

Thanks to Artificial Intelligence, much of what once remained confined to our fantasies, dreams, and sci-fi movies is turning into reality! Advances in this domain are awe-inspiring. Moreover, they are proving helpful for start-ups to develop newer and better ways to micro- and macro-manage all kinds of tasks.

The most exciting part is that AI offers opportunities to tackle real-world problems like climate change, waste management, assistive surgeries, and reducing carbon footprints, to name a few.
It doesn’t stop at that! These topics branch into further sub-topics and impressive solutions that a wonderfully intelligent technology is bringing to life, to aid humans and improve lives!

What must we know about the RoboBees?

One such feat is artificial pollination! Yes, you read that right: artificially intelligent pollinators could help both farmers and bees in the coming future!
Scientists and researchers have been working to develop such amazing little mechanical creatures for years now.

With the aim of creating a robotic bee colony and knowing its basic fundamentals, the RoboBee project was launched in 2009 – to conduct early robotic fly experiments. It was preceded by the DelFly Project, which started in 2005. 

Flying micro-robotics has been a keen area of interest for researchers for quite some time now. The goals associated with robotic bees (or micro-robots) include the following:

  • mimicking insect flight and rapid wing movement
  • aiding pollination artificially
  • surveillance
  • successful communication
  • search and rescue, etc.

If such pollinators are developed, they would promote pollination and consequently aid farmers wonderfully too! The only issue is that, though we are treading this path, we still don’t have practical models.

What are the major concerns?

Any technological advancement, evidently, takes time. Yes, it offers fantastic solutions, but the process is painstakingly difficult too! For instance, micro-robotics have immense potential undoubtedly, but a practically feasible pollinator hasn’t been developed till date. 

There are several factors that come into play – developing a tiny machine requires a lot of skill, subject expertise and intelligence. After all, it is complex to create a miniature version that is mechanical and incorporates the laws of physics too.

Researchers and engineers developing an even smaller version of a previously built device can’t simply look to the predecessor for guidance. Why not? When dealing with a very small device, the nature of the forces at play doesn’t remain the same. Hence, building a tiny pollinator, though promising, is a Herculean task!

Another hurdle is the viability of artificial pollinators. How would they sustain? How will they get charged or have a power supply? Studies and research are being conducted to find solutions to these problems.

The third and most important question revolves around making these artificial insects intelligent – just like the real ones. Yes, AI, ML aim to do the same; but RoboBees are yet to get there. How they will make decisions like wasps, bees and their likes is a tough nut to crack – as the decision-making bit plays a crucial role in pollinating plants. 

In Conclusion

We all know bees are essential for crops: they make pollination successful, and that’s how plants bear fruit. However, with their decreasing population, environmental concerns are rising further. To address such concerns, researchers have developed a robotic bee drone that uses GPS, a high-resolution camera, AI, etc. The aim is to take care of pollination, intelligently and in a modern way!

Clearly, AI won’t fail to impress – be it today, tomorrow or forever! We must remember to explore the new opportunities and ways to use technology to our advantage without harming the environment. That’s one of the primary reasons for developing mechanical insects like the robotic bees!

With innovation and continuous efforts to their aid, artificially intelligent tools will provide solutions for more complex problems. Thus, we all have a great deal to look forward to and get inspired from!


The Inverse Problem in Random Dynamical Systems

We are dealing here with random variables recursively defined by Xn+1 = g(Xn), with X1 being the initial condition. The examples discussed here are simple, discrete and one-dimensional: the purpose is to illustrate the concepts so that it can be understood and useful to a large audience, not just to mathematicians. I wrote many articles about dynamical systems, see for example here. The originality in this article is that the systems discussed are now random, as X1 is a random variable. Applications include the design of non-periodic pseudorandom number generators, and cryptography. Also, such systems, especially more complex ones such as fully stochastic dynamical systems, are routinely used in financial modeling of commodity prices.

We focus on mappings g on the fixed interval [0, 1]. That is, the support domain of Xn is [0, 1], and g is a many-to-one mapping onto [0,1]. The most trivial example, known as the dyadic or Bernoulli map, is when g(x) = 2x – INT(2x) = { 2x } where the curly brackets represent the fractional part function (see here). This is sometimes denoted as g(x) = 2x mod 1. The most well-known and possibly oldest example is the logistic map (see here) with g(x) = 4x(1 – x).

We start with a simple exercise that requires very little mathematical knowledge, but a good amount of out-of-the-box thinking. The solution is provided. The discussion is about a specific, original problem, referred to as the inverse problem, and introduced in section 2. The reasons for being interested in the inverse problem are also discussed. Finally, I provide an Excel spreadsheet with all my simulations, for replication purposes.

1. The standard problem

One of the main problems in dynamical systems is to find if the distribution of Xn converges, and find the limit, called invariant measure, invariant distribution, fixed-point distribution, or attractor. The attractor, depending on g, is typically the same regardless of the initial condition X1, except for some special initial conditions causing problems (this set of bad initial conditions has Lebesgue measure zero, and we ignore it here). As an example, with the Bernoulli map g(x) = { 2x }, all rational numbers (and many other numbers) are bad initial conditions. They are however far outnumbered by good initial conditions. It is typically very difficult to determine if a specific initial condition is a good one. Proving that π/4 is a good initial condition for the Bernoulli map would be a major accomplishment, making you instantly famous in the mathematical community, and proving that the digits of π in base 2, behave exactly like independently and identically distributed Bernoulli random variables. Good initial conditions for the Bernoulli map are called normal numbers in base 2.

It is also assumed that the dynamical system is ergodic: all systems investigated here are ergodic; I won’t elaborate on this concept, but the curious, math-savvy reader can check the meaning on Wikipedia. Finding the attractor is a difficult problem, and it usually requires solving a stochastic integral equation. Except in rare occasions (discussed here and in my book, here), no exact solution is known, and one needs to use numerical methods to find an approximation. This is illustrated in section 1.1., with the attractor found (approximately) using simulations in Excel. In section 2., we focus on the much easier inverse problem, which is the main topic of this article.

1.1. Standard problem: example

Let’s start with X1 defined as follows: X1 = U / (1 – U)^α, where U is a uniform deviate on [0, 1], α = 0.25, and ^ denotes the power operator (2^3 = 8). We use g(x) = { 4x(1 – x) }, where { } denotes the fractional part function. Essentially, this is the logistic map. I produced 10,000 deviates for X1, and then applied the mapping g iteratively to each of these deviates, up to Xn with n = 53. The scatterplot below represents the empirical percentile distribution function (PDF), respectively for X3 in blue, and X53 in orange. These PDF’s, for X2, X3, and so on, slowly converge to a limit, corresponding to the attractor. The orange S-curve (n = 53) is extremely close to the limiting PDF, and additional iterations (that is, increasing n) barely provide any change. So we found the limit (approximately) using simulations. Note that the cumulative distribution function (CDF) is the inverse of the PDF. All this was done with Excel alone.
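For readers who prefer a scripting language to Excel, here is a rough Python sketch of the same experiment; it is not the original Excel workbook, and the seed and output details are arbitrary choices.

```python
# Approximate replication of the simulation described above (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
alpha, n_deviates, n_iter = 0.25, 10_000, 53

U = rng.uniform(size=n_deviates)
X = U / (1 - U) ** alpha                  # initial condition X1

def g(x):
    y = 4 * x * (1 - x)
    return y - np.floor(y)                # fractional part { 4x(1 - x) }

snapshots = {}
for n in range(2, n_iter + 1):
    X = g(X)
    if n in (3, 53):
        snapshots[n] = np.percentile(X, np.arange(1, 100))   # empirical PDF (percentile curve)

# Plotting snapshots[3] against snapshots[53] reproduces the blue/orange
# S-curves described above and shows the convergence toward the attractor.
print(snapshots[53][:5])
```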

2. The inverse problem

The inverse problem consists of finding g, assuming the attractor distribution (the orange curve in the above figure) is known. Typically, there are many possible solutions. One reason to solve the inverse problem is to get a sequence of random variables X1, X2, and so on, that exhibits little or no auto-correlation. For instance, the lag-1 auto-correlation (between Xn and Xn+1) for the Bernoulli map is 1/2, which is far too high depending on the applications you have in mind. It is important in cryptography applications to remove these auto-correlations. The solution proposed here also satisfies the following property: X2 = g(X1), X3 = g(X2), X4 = g(X3) and so on, all have the same pre-specified attractor distribution, regardless of the (non-singular) distribution of X1.

2.1. Exercise

Before diving into a solution, if you have time, I ask you to solve the following simple inverse problem. 

Find a mapping g such that if Xn+1 = g(Xn), the attractor distribution is uniform on [0, 1]. Can you find one yielding very low auto-correlations between the successive Xn‘s? Hint: g may not be continuous. 

2.2. A general solution to the inverse problem

A potential solution to the problem in section 2.1 is g(x) = { bx } where b is an integer larger than 1. This is because the uniform distribution on [0, 1] is the attractor for this map. The case b = 2 corresponds to the Bernoulli map discussed earlier. Regardless of b, INT(bXn) represents the n-th digit of X1 in base b. The lag-1 autocorrelation between Xn and Xn+1 is then equal to 1 / b. Thus, the higher b, the better. Note that if you use Excel for simulations, avoid even integer values for b, as Excel has an internal glitch that will make your simulations meaningless after n = 45 iterations or so.
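A quick numerical check of that 1/b figure can be done as follows; this is an illustrative sketch (in Python rather than Excel), using an odd value of b as recommended above.

```python
# Empirical lag-1 autocorrelation of the map g(x) = { b*x } at its uniform attractor.
import numpy as np

rng = np.random.default_rng(1)
b = 3
X = rng.uniform(size=100_000)             # X_n ~ Uniform(0, 1) at the attractor
Y = b * X - np.floor(b * X)               # X_{n+1} = { b * X_n }

print(np.corrcoef(X, Y)[0, 1], 1 / b)     # both should be close to 0.333
```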

Now, a general solution offered here, for any pre-specified attractor and any non-singular distribution for X1, is based on a result proved here. If g is the solution in question, then all Xn (with n  >  1) have the same distribution as the pre-specified attractor. I provide an Excel spreadsheet showing how it works for a specific example.

First, let’s assume that g* is a solution when the attractor is the uniform distribution on [0, 1]. For instance g*(x) = { bx } as discussed earlier. Let F be the CDF of the target attractor, and assume its support domain is [0, 1]. Then a solution g is given by g(x) = F^-1( g*( F(x) ) ), that is, g(x) = F^-1( { b F(x) } ), where F^-1 denotes the inverse of F.

For instance, if F(x) = x^2, with x in [0, 1], then g(x) = SQRT( { bx^2 } ) works, assuming b is an integer larger than 1. The scatterplot below shows the empirical CDF of X2 (blue dots, based on 10,000 deviates) versus the CDF of the target attractor with distribution F (red curve): they are almost indistinguishable. I used b = 3, and for X1, I used the same distribution as in section 1.1. The detailed computations are available in my spreadsheet, here (13 MB download).
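The same check can be scripted outside Excel. The sketch below assumes the solution g(x) = F^-1( { b F(x) } ) described above, with F(x) = x^2 and b = 3, and compares the empirical CDF of X2 with the target F at a few points; it is an illustration, not the spreadsheet’s exact computation.

```python
# Illustrative check of the inverse-problem solution for F(x) = x^2, b = 3.
import numpy as np

rng = np.random.default_rng(2)
alpha, b = 0.25, 3

U = rng.uniform(size=10_000)
X1 = U / (1 - U) ** alpha                  # same X1 as in section 1.1

def g(x):
    y = b * x ** 2                         # b * F(x) with F(x) = x^2
    return np.sqrt(y - np.floor(y))        # F^-1( { b * F(x) } ), since F^-1(y) = sqrt(y)

X2 = g(X1)
for t in (0.25, 0.5, 0.75):
    print(t, np.mean(X2 <= t), t ** 2)     # empirical CDF of X2 vs. target F(t)
```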

The summary statistics and the above plot are found in columns BD to BH, in my spreadsheet.

To receive a weekly digest of our new articles, subscribe to our newsletter, here.

About the author:  Vincent Granville is a data science pioneer, mathematician, book author (Wiley), patent owner, former post-doc at Cambridge University, former VC-funded executive, with 20+ years of corporate experience including CNET, NBC, Visa, Wells Fargo, Microsoft, eBay. Vincent is also self-publisher at DataShaping.com, and founded and co-founded a few start-ups, including one with a successful exit (Data Science Central acquired by Tech Target). You can access Vincent’s articles and books, here.


The Acceleration of the Move to the Cloud – What’s Next for Data Strategy?

The migration to the cloud has accelerated considerably since the start of the COVID-19 pandemic. Analysts at IDC predicted that by the end of 2021, 80% of enterprises will have moved from on-premise data centers to cloud service providers and platforms. That’s because the cloud provides huge benefits, especially for a remote and distributed workforce, and the means to scale and future-proof businesses in an environment of massive change. Moving to the cloud isn’t an option anymore, it’s essential, but it also introduces a whole new set of challenges and opportunities from a data strategy perspective.

So, what’s next for enterprises looking to make the most of their organization’s data in this cloud-first landscape?

Adapting to the new normal. Disrupting the old normal.

IDC expects the volume of data that businesses generate to grow by 61%, reaching 175 zettabytes by 2025. Traditional on-premises IT architectures don’t have the capacity or processing power to handle such vast amounts. Organizations need a more agile way to access their data, adapt to unforeseen situations and give themselves room to grow, which is part of the cloud’s appeal.

The cloud has also proven itself as a far better platform from which to collaborate, especially as COVID amplified disruptions in the way we live and work. For instance, more remote work has increased the use of collaboration tools like Slack or Zoom that run on cloud providers like AWS and Oracle. Or consider how much easier it is to work with colleagues on the same cloud-native documents in Google Workspace rather than sending Word, PowerPoint, or Excel documents back and forth over email.

Again, as more services have gone online, accumulating more data, access to that data has become even more important to ensure that organizations effectively operate and serve their customers. The cloud enables businesses to work faster, and do more, with more data. Moving to the cloud has become a way of staying ahead of the competition, and for many businesses, a means of survival – but all this accessible data does require a different approach.

The power to get more from your data.

The most impactful driver of migration to the cloud is the way cloud technology increases the value organizations get from their data. The huge capacity, power and flexibility of cloud services enables you to gather, manage and analyze huge amounts and types of data that were previously difficult or impossible to access, thereby delivering better insights to drive decision-making.

The increasing volume of data derives from an ever-growing array of sources. There’s structured data found in sources such as spreadsheets and SQL databases. There’s unstructured data, like text, email, images, audio, video, sensor readings, and more. It’s estimated that by 2025, 80% of data will be unstructured. Until the rise of the cloud, this data was often too heavy, complex, and varied to store and analyze. Now, the cloud can handle it, offering enormous opportunities to add business value by enabling organizations to extract intelligence from this huge, often untapped mass of data.

AWS, for example, provides users with high performance cloud data systems and comprehensive cloud data management and analytics services. They enable organizations to modernize their data architectures and create new business value. Combined with the power of a growing number of data/analytics platforms available today, organizations can manage and analyze the vast array of data generated and stored across all of AWS’ suite of cloud services and deliver insights that benefit every user and every team.

Embedded analytics to meet the promise of the cloud.

Just because more data is available via the cloud does not mean that everyone across an organization is constantly extracting value from it. In fact, analytic adoption is still relatively low – especially for lines of business outside of IT – and this data overload brought on by the cloud may only intimidate and deter them from attempting to get started. Too much data means too many possible insights and it can be hard to know where to begin.

But breaking down this analytic adoption barrier is critical for organizations looking to become truly data driven. That’s why the most innovative organizations are starting to take advantage of embedded analytics that provide insights to business users where they are already spending their time (traditionally, they would need to leave their workflow and analyze a dashboard to attempt a data-driven decision). Now, embedded analytics technology can infuse actionable insights directly in the collaboration apps people are already using without disrupting any workflow.

A user might type in a question to Slack, for example, and the embedded analytics (fueled by ML/AI) can deliver the answer immediately within the Slack interface. In this case, extracting value from all that business data becomes easy, or even mindless, without disrupting the usual workflow. No more intimidation, just immediate insights that can drive impactful business decisions – and better yet, all cloud-based.

Moving to the cloud is a compelling proposition with game-changing potential, which is why so many enterprises will make the shift by the end of the year. But all that accessible data brought on by the cloud will not guarantee a data-driven enterprise on its own. Combining the cloud with data strategies that encourage analytic adoption across the organization, such as leveraging embedded analytics, is what will truly send your organization into the stratosphere.

