Digital Twins for Data-Driven Maintenance by UReason
Apply Digital Twin technology to realize data-driven maintenance by creating digital copies of your assets & processes and run real-time predictive analytics on them in a digitally secure way.
Cloud computing is not a single piece of technology like a microchip or a mobile phone. Rather, it is a system primarily composed of three services: software-as-a-service (SaaS), infrastructure-as-a-service (IaaS), and platform-as-a-service (PaaS).
The presentation gives an overview of the reasons for implementing a Manufacturing Intelligence strategy and how to justify the investment. Topics covered include:
-Manufacturing Intelligence Overview
-Business Drivers for Implementing an MI Project
-What Data are we looking for?
-Developing the Business Case
-Execution Strategies for Success
-Some Challenges
PLM (Product Lifecycle Management) is the next level of lean thinking that applies lean principles to the entire product lifecycle through an enterprise system. PLM utilizes information technology to facilitate collaboration and information sharing across the organization. It aims to optimize processes from the beginning by substituting physical testing with virtual simulations. Current major PLM systems include Siemens NX, Dassault CATIA, Autodesk PLM 360, PTC PLM, and Aras Innovator, which are primarily commercial desktop or cloud-based systems that integrate with CAD/CAM software.
Smarter Manufacturing through Equipment Data-Driven Application Design - Kimberly Daich
The authors relate a number of specific Smart Manufacturing objectives to the applications required to achieve them and show how standards-based equipment models directly support their respective algorithms. By Alan Weber of Cimetrix, Inc. and Mark Reath of GlobalFoundries.
Predix Builder Roadshow event content detailing the Industrial Internet of Things, Building the Digital Twin, Predix Edge Essential, Predix Dojo Program, and upcoming Predix events.
Miguel Angel Perdiguero - Head of BIG data & analytics Atos Iberia - semanain...COIICV
This document discusses Industry 4.0 and the digital transformation of industry. It describes key technological pillars like the Internet of Things, additive manufacturing, and data analytics. It provides examples of how these technologies can be applied through predictive maintenance and customized products. The document also introduces Atos Codex, an open industrial analytics platform that uses big data, high performance computing, and machine learning to deliver business insights and solutions.
This document discusses best practices for migrating distributed control systems (DCS). It covers why migrations are necessary due to issues like aging systems, loss of support, and high maintenance costs. Selection criteria for new systems include taking advantage of new technologies, long-term supplier support, accommodating advanced applications, and minimizing costs, risks, and downtime. Common migration approaches like bulldozing, cabling solutions, transition solutions, and I/O replacement are presented along with their benefits and challenges. Critical implementation guidelines emphasize planning with clear objectives and timeframes, using standards, involving operations teams early, and preparing with a project timeframe shorter than the outage period.
ARC Advisory Group's 2014 European Industry Forum in the Netherlands included this interesting presentation from Willem Hazenberg of Stork on control system migration.
Next generation business automation with the red hat decision manager and red...Masahiko Umeno
Red Hat offers the Decision Manager and Process Automation Manager to enable next generation business automation. The key pillars of their solution are application modernization, robotic process automation, IoT, AI, and business optimization. For successful application projects, companies should focus on the application architecture, organizing rules and processes, and using an iterative software development methodology. The Process Automation Manager supports business process management with capabilities like case management, while the Decision Manager is used for managing rules.
Digital Twin Technology: Function, Significance, and Benefits - emilybrown8019
Digital twin technology creates a virtual model of a physical system. It uses data from sensors on physical assets along with historical and real-time data to simulate the physical system. This allows companies to test scenarios, monitor assets remotely, and predict maintenance needs. Some key benefits are faster production, improved customer service, reduced costs, and greater operational efficiencies.
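The maintenance-prediction idea above can be sketched in a few lines: keep a virtual model of the asset, feed it the same operating inputs as the real equipment, and flag the asset when measurements drift away from what the twin predicts. Everything in this sketch (the pump model, constants, and thresholds) is illustrative, not any vendor's implementation.

```python
# Toy digital-twin sketch: a virtual pump model is compared against live
# sensor readings; drift between prediction and measurement flags a
# maintenance need. All names and thresholds are illustrative.

def twin_predicted_flow(rpm: float, k: float = 0.02) -> float:
    """Idealized pump model: flow is proportional to shaft speed."""
    return k * rpm

def needs_maintenance(rpm: float, measured_flow: float,
                      tolerance: float = 0.15) -> bool:
    """Flag the asset when measurement drifts from the twin's prediction."""
    expected = twin_predicted_flow(rpm)
    if expected == 0:
        return measured_flow != 0
    return abs(measured_flow - expected) / expected > tolerance

# A healthy reading tracks the model; a degraded one drifts past tolerance.
print(needs_maintenance(1500.0, 29.5))  # within 15% of the expected 30.0
print(needs_maintenance(1500.0, 22.0))  # roughly 27% low: flag it
```

Real deployments replace the toy model with a calibrated simulation and stream sensor data continuously, but the compare-and-flag loop is the same.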
Proactive Services Through Insights and IoT by M. Capone - Capgemini
Measurable Business Cases for IoT in the Value Chain
IoT is changing the relationship between the customer and the company and enabling companies to develop new business models, but there are also many valuable applications
for IoT in the value chain, which can have immediate and measurable impacts on the bottom line. Capgemini shows the measurable use cases for IoT in the supply chain, product
development, production, logistics, and service.
Speaker:
Michael Capone,
Principal Business Analyst, Digital Customer Experience, Capgemini
Amidst an industry cloud of confusion about what "AIOps" is and what it can do, these slides, based on the webinar from EMA research, delineate a clear path to victory for business and IT stakeholders seeking to use machine learning to optimize the performance of critical business services.
The document discusses cross-chain collaboration centers that can automate and optimize business processes across organizations. It notes the challenges of disparate systems and lack of access to up-to-date information. The vision is described as a world where enterprises can run their business effectively without IT constraints, where the business process is the application. A cross-chain collaboration center is proposed as a software solution that can automate, integrate, synchronize and implement business processes by generating applications from a business process model and solution library templates.
I4MS Talks: Augmented reality in service operations - Irina Frigioiu
At the I4MS Talk of 27 April 2021, Valerio Alessandroni, I4MS Ambassador, talked about the importance of AR solutions and their benefits and potential to improve the workforce. An AR-enabled workforce can perform complex operations with little initial knowledge. Paperless shop floors, optimized process definitions, and intelligent work instructions can propel organizations forward in their digital transformation journeys, boosting them ahead of the competition and into a new era of manufacturing.
ARC's Greg Gorbach's Global Manufacturing Presentation at ARC's 2008 Industry...ARC Advisory Group
The document discusses trends in operations management (OM) and manufacturing execution systems (MES) that are driving manufacturers to invest more in plant floor IT systems. It outlines the business requirements facing manufacturers today, such as globalization, rapid innovation, compliance, and real-time performance needs. However, older manufacturing systems have not kept pace with these demands. The document then examines three pillars of modern OM solutions: infrastructure, connectivity, and functions/processes. It provides examples of key OM functions like production planning, execution, and product data tracking that new systems must support.
This document summarizes the implementation of Demantra as a replacement forecasting tool for an education technology company's legacy Manugistics system. Key points:
- The legacy system was outdated, at end of life, and presented business risks. Demantra was selected after an evaluation process to provide an integrated supply chain planning solution.
- The project involved upgrading the existing Oracle Advanced Supply Chain Planning (ASCP) infrastructure for stability. Phase I implemented Demantra for sales forecasting. Phase II added real-time sales and operations planning capabilities.
- Challenges included convincing users to change and introducing the new software. The implementation enabled more accurate forecasting, reduced manual work, and integrated supply chain
Woodward, Inc. implemented an Industrial Internet of Things (IIoT) solution using PTC's ThingWorx platform to address challenges around lack of process for acting on operational machine data, manual tasks, and database synchronization issues. The solution involved leveraging existing PTC technologies and developing a Manufacturing Information System (MIS) integrated with manufacturing equipment. Benefits included improved training certification tracking, calibration compliance, access to work instructions, and more informed decision making through analytics. This led to enhanced quality, efficiency, and productivity.
ABC Manufacturing used ThingWorx Analytics to automate advanced analytics on factory data and gain real-time operations visibility. Challenges of large, complex data volumes and inability to use real
VOLTRIO SOLUTIONS PVT LTD is an automation product engineering service organiz...voltriosolutions
To accelerate development of your IoT solutions and products, we provide specialized engineering services across the entire IoT development cycle, from consulting, device engineering, cloud and mobility application development, and data analytics to support & maintenance.
We have the functional expertise and competency to provide standardized solutions for industrial automation and Industry 4.0.
Delivering tailor-made solutions that users can simply ease into, with unparalleled expertise in design engineering, factory testing, and commissioning of DCS, SCADA, and PLC systems, has set benchmarks and earned us expertise in the oil and gas, pharmaceutical, automotive manufacturing, food & beverage, and petrochemical and chemical industries.
Distributed Trace & Log Analysis using ML - Jorge Cardoso
The field of AIOps, also known as Artificial Intelligence for IT Operations, uses advanced technologies to dramatically improve the monitoring, operation, and troubleshooting of distributed systems. Its main premise is that operations can be automated using monitoring data to reduce the workload of operators (e.g., SREs or production engineers). Our current research explores how AIOps – and many related fields such as deep learning, machine learning, distributed traces, graph analysis, time-series analysis, sequence analysis, advanced statistics, NLP and log analysis – can be explored to effectively detect, localize, predict, and remediate failures in large-scale cloud infrastructures (>50 regions and AZs) by analyzing service management data (e.g., distributed traces, logs, events, alerts, metrics). In particular, this talk will describe how a particular monitoring data structure, called distributed traces, can be analyzed using deep learning to identify anomalies in its spans. This capability empowers operators to quickly identify which components of a distributed system are faulty.
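The deep-learning details of span anomaly detection are beyond a short snippet, but the core loop the talk describes, scoring each span of a trace against the historical behaviour of its operation, can be illustrated with a simple statistical stand-in. The span fields and operation names below are made up for illustration, not a real tracing schema.

```python
from statistics import mean, stdev

# Minimal stand-in for trace-based anomaly detection: flag spans whose
# duration deviates strongly from that operation's historical baseline.
# A deep model would learn this baseline; here we use mean + k*stddev.

history = {
    "auth.check": [12, 14, 11, 13, 12, 15, 13],   # ms per past span
    "db.query":   [40, 38, 45, 42, 41, 39, 44],
}

def anomalous_spans(trace, k=3.0):
    """Return operation names whose span duration exceeds mean + k*stddev."""
    flagged = []
    for span in trace:
        base = history.get(span["op"])
        if not base:
            continue  # no baseline yet for this operation
        mu, sigma = mean(base), stdev(base)
        if span["duration_ms"] > mu + k * sigma:
            flagged.append(span["op"])
    return flagged

trace = [{"op": "auth.check", "duration_ms": 13},
         {"op": "db.query", "duration_ms": 180}]   # db.query is stuck
print(anomalous_spans(trace))  # ['db.query']
```

The payoff is the same as in the talk: the flagged span points the operator at the faulty component of the distributed system.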
AIOps: Anomalies Detection of Distributed Traces - Jorge Cardoso
Introduction to the field of AIOps, large-scale monitoring, and observability. Provides an example illustrating how deep learning can be used to analyze distributed traces to reveal exactly which component is causing a problem in microservice applications.
Presentation given at the National University of Ireland, Galway (NUI Galway)
on 2019.08.20.
Thanks to Prof. John Breslin
The Innovative Service Platform for Small and Medium Manufacturing Company - Hatio, Lab.
We are preparing a new SaaS-based service for small and medium manufacturing companies.
This is just a draft of the advanced planning document.
Any recommendations and comments are welcome.
Embedded machine learning-based road conditions and driving behavior monitoring - IJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
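The accuracy, precision, and recall figures reported above all follow directly from a classifier's confusion matrix; this sketch shows the arithmetic. The counts below are made up for illustration and are not the paper's actual confusion matrix.

```python
# Accuracy, precision, and recall derived from confusion-matrix counts:
# tp = true positives, fp = false positives, fn = false negatives,
# tn = true negatives. Counts are illustrative only.

def metrics(tp: int, fp: int, fn: int, tn: int):
    accuracy  = (tp + tn) / (tp + fp + fn + tn)   # overall correctness
    precision = tp / (tp + fp)                    # flagged events that were real
    recall    = tp / (tp + fn)                    # real events that were caught
    return accuracy, precision, recall

acc, prec, rec = metrics(tp=92, fp=6, fn=8, tn=94)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f}")
```

For a safety system like this one, recall is usually the metric to watch: a missed aggressive-driving event (false negative) is costlier than a spurious alert.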
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
artificial intelligence and data science contents.pptx - GauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024 - Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
The CBC machine is a common diagnostic tool used by doctors to measure a patient's red blood cell count, white blood cell count and platelet count. The machine uses a small sample of the patient's blood, which is then placed into special tubes and analyzed. The results of the analysis are then displayed on a screen for the doctor to review. The CBC machine is an important tool for diagnosing various conditions, such as anemia, infection and leukemia. It can also help to monitor a patient's response to treatment.
Introduction: e-waste definition; sources of e-waste; hazardous substances in e-waste; effects of e-waste on environment and human health; need for e-waste management; e-waste handling rules; waste minimization techniques for managing e-waste; recycling of e-waste; disposal treatment methods of e-waste; mechanism of extraction of precious metals from leaching solution; global scenario of e-waste; e-waste in India; case studies.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
# Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
# Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
  - Allows a user to pass a specific IAM role to an AWS service (EC2), typically used for service access delegation. Then exploit the PassRole misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
UReason - Webinar Digital Twins in Data-Driven Maintenance
1. Digital Twins for Data Driven Maintenance?
Insights in (20) minutes + Q&A
Speaker : Jules Oudmans
@UReason : Responsible for APM Deliveries
Background : Applied Physics, Mechanical Engineering & Computer Science
AI/Ind 4.0 : >25 Years/ > 8 Years
Motto : “Digital, What Else?“
2. About Us
Software: APM – On-Device, Edge, Cloud
20+ years in business
Monitoring and optimization of components, assets and processes with data
Industry know-how: Operations, Maintenance, Data Architecture, OT & IT
3. Software: APM Studio
Software platform for developing and deploying Industry 4.0 applications in a low/no-code environment
Condition-Based Monitoring | Predictive Maintenance | Digital Twin | Advanced Alarm Management
Low/No Code
High accuracy by combining causal models with Machine Learning
Scalable to 1000's of devices
As Platform: Asset Owners
As Micro Service: OEMs & Skid Providers
4. What do we do every day:
- Use Cases & Business Cases
- Model Development
- Data Collection
- Hardware & APM Software
6. Digital Twins .. Everywhere .. But What is it?
“Digital twins are becoming a business imperative,
covering the entire lifecycle of an asset or process
and forming the foundation for connected products
and services. Companies that fail to respond will be
left behind.” – Thomas Kaiser, SAP Senior Vice
President of IoT
“The concept is exciting, absolutely, but more
complex than one can be led to believe. Today there
is a naiveté in many companies about the cost and
time aspects.” – Marc Halpern, Gartner Analyst
7. Digital Twins Standard: ISO 23247
digital twin
<manufacturing> fit for purpose digital representation (3.2.2) of an observable manufacturing element with synchronization between the element and its digital representation
digital representation
<manufacturing> data element representing a set of properties of an observable manufacturing element (3.2.5)
observable manufacturing element (OME)
item that has an observable physical presence or operation in manufacturing.
Note 1 to entry: Observable manufacturing elements include personnel, equipment,
material, process, facility, environment, product, and supporting document.
8. A digital twin is a digital representation of a real-world entity or system. The implementation of a digital twin is an encapsulated software object or model that mirrors a unique physical object, process, organization, person or other abstraction. Data from multiple digital twins can be aggregated for a composite view across a number of real-world entities, such as a power plant or a city, and their related processes.
The Digital Twin (Source: Gartner)
Inputs (real-world objects) → Processing (simulate/predict) → Outputs (to act upon)
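The inputs → processing → outputs loop above can be sketched in a few lines of code. This is a minimal illustrative sketch, not part of APM Studio or any Gartner reference: the asset name, the readings and the naive extrapolation model are all invented.

```python
from dataclasses import dataclass, field

# Minimal illustrative twin following the inputs -> processing -> outputs
# loop: ingest real-world readings, run a (deliberately trivial) model,
# emit a prediction to act upon.

@dataclass
class DigitalTwin:
    asset_id: str
    readings: list = field(default_factory=list)  # inputs from the physical asset

    def ingest(self, value: float) -> None:
        """Synchronize the twin with a new measurement from the real object."""
        self.readings.append(value)

    def predict_next(self) -> float:
        """Processing: naive linear extrapolation of the last two readings."""
        if len(self.readings) < 2:
            return self.readings[-1] if self.readings else 0.0
        return 2 * self.readings[-1] - self.readings[-2]

twin = DigitalTwin("pump-01")
for temperature in (60.0, 62.0, 64.0):
    twin.ingest(temperature)
print(twin.predict_next())  # 66.0
```

The essential point the sketch captures is the synchronization from the ISO definition: the twin is only useful while `ingest` keeps it aligned with the physical object.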
9. Types of Digital Twins
- Process Twin – e.g. a manufacturing process
- System/Unit Twin – e.g. a crude unit, cooling unit, …
- Asset Twin – e.g. a turbine, motor, valve, …
- Component Twin – e.g. a bearing, piston, axle, …
Stakeholders: Field Services Management, Designers, Product Managers, Marketing/Sales
10. Elements of a Digital Twin
1) Physical Equipment
2) Twin Model (Data)
3) Knowledge
4) Analytics
11. Example – FOCUS-ON – Asset/Component Twin
Maintenance workflow: Inspection → Notification → Approval → Work Prep. → Scheduling → Execution → Closeout
Maintenance triggers: Planned Maint | Breakdown | PdM/CbM
Automate: NE107 events straight to your CMMS
Data Based: CbM and PdM on the basis of the data in the device (APM Inside)
12. Example – KUKA – System/Unit Twin
Problem
KUKA's production cell with HELLER Milling Centers runs non-stop, including the weekends, without personnel. During the weekends some of the tools run to the end of their lifecycle and the whole production stops until Monday morning, when work-floor personnel come back. This results in lost production and lost profits.
Solution
UReason developed an algorithmic tool-change recommender inside APM Studio. The recommender system uses the remaining tool lifecycle, available pieces and production programs to advise the work-floor personnel what concrete actions to take regarding tool changes to ensure optimal production over the weekend.
https://www.ureason.com/resources/ureason-supports-kuka-towards-more-autonomous-production/
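The recommender logic described above can be reduced to a toy sketch: compare each tool's remaining lifecycle against the expected weekend demand. Tool names, numbers and the single-threshold rule are invented for illustration; the real APM Studio recommender also weighs production programs and available pieces.

```python
# Toy tool-change recommendation: flag tools whose remaining lifecycle
# (in machinable parts) cannot cover the expected weekend demand.

def recommend_tool_changes(tools: dict, weekend_demand: int) -> list:
    """Return tools to change before leaving the cell unmanned."""
    return sorted(t for t, remaining in tools.items() if remaining < weekend_demand)

# remaining parts each tool can still machine before end of life
tools = {"mill-A": 120, "mill-B": 35, "drill-C": 80}
print(recommend_tool_changes(tools, weekend_demand=50))  # ['mill-B']
```

Even this trivial rule illustrates the value proposition: the decision is moved from Monday-morning discovery to a Friday-afternoon recommendation.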
13. Example – Airborne – Process Twin
Problem
Airborne's production cell produces laminates that have very tight quality specs – a too-large gap width results in product loss and lost profits.
Solution
UReason deployed an APM Studio solution that provides early insights into gap width using ML models developed by Vortech. The system predicts the upcoming gap width so that operations/control can intervene.
https://www.ureason.com/resources/whitepaper-machine-learning-in-de-fabriek/
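A minimal sketch of the early-warning idea: extrapolate recent gap-width measurements and flag when the prediction drifts past the quality spec so operators can intervene in-line. The spec limit and measurements are invented; the production system uses ML models from Vortech rather than this simple trend rule.

```python
# Early-warning sketch for the gap-width use case: extrapolate the recent
# trend and flag when the predicted gap width exceeds the quality spec.

SPEC_LIMIT_MM = 0.50  # hypothetical maximum acceptable gap width

def predict_gap(history: list) -> float:
    """Forecast the next gap width from the trend of the last measurements."""
    if len(history) < 2:
        raise ValueError("need at least two measurements")
    recent = history[-3:]
    trend = (recent[-1] - recent[0]) / (len(recent) - 1)
    return recent[-1] + trend

def needs_intervention(history: list) -> bool:
    """True when the predicted gap width breaches the spec limit."""
    return predict_gap(history) > SPEC_LIMIT_MM

measurements = [0.40, 0.44, 0.48]  # gap width drifting upward, in mm
print(round(predict_gap(measurements), 2))  # 0.52
print(needs_intervention(measurements))     # True
```

The point of predicting rather than merely measuring is lead time: the alarm fires while the process is still within spec, leaving room for an in-line adjustment instead of scrap.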
14. Qualifying Criteria for Digital Twins
Criteria to consider for Digital Twins and Data-Driven Maintenance:
1. Data from field level can be extracted and enhanced to provide further insights higher in the chain.
2. The problem must support a business case, meaning there should be a target or an outcome to predict/calculate that is of value to operations.
3. Preferably the problem should have a record of the operational history of the equipment that contains both good and bad outcomes.
4. The business should have domain experts who have a clear understanding of the problem.
16. Summary
Source: Reddit MythBusters
1. Digital Twins are not a Myth!
2. Industry Standard: ISO 23247
3. Different levels of Digital Twins with different use-cases
4. Elements of a Digital Twin
5. Criteria before starting
https://www.ureason.com/resources/article-how-creating-a-digital-twin-helps-plants-run-better/
Hello everyone, welcome to this webinar hosted by UReason.
In this webinar we will look at how Digital Twins can help you in the world of Data Driven Maintenance. This short webinar has four parts:
First, I will briefly introduce the company UReason
After this I will go into detail on the different types of Digital Twins, the elements that make a Digital Twin and criteria for Digital Twins to be used in Data Driven maintenance initiatives
Then we have a look at some of the examples of Digital Twins that we have been active in
And this is followed by a Q&A session
I am Jules Oudmans, presenting to you today. I have a background in AI starting in the nineties, and over the past 25 years I have been involved many times in prognostic and predictive programs that ensure asset integrity for critical assets and critical processes. I have a mixed background in physics, mechanical engineering and computer science .. And my motto is like the famous coffee slogan … "Digital, What Else?"
I work at UReason, a software company, that provides solutions for real-time condition based and predictive maintenance and I help our customers daily to use our software – from data analysis to the set-up of applications and solutions that monitor important assets and processes.
At UReason we combine our domain expertise and software knowledge with our customers, and I help them from data to solutions.
We have offices in Rotterdam, which we see here in the pictures, and Wokingham in the UK.
Our customers are predominantly in the manufacturing and process industry, and the majority of them are located in Western Europe and North America.
Our software, APM Studio, is used at different levels in the automation pyramid
Embedded – with OEMs – monitoring faults and risks in 'isolation' at the asset. An asset can be instrumentation, an actuator, or a pump, compressor, filter, et cetera.
At the Edge … processing asset data of one or multiple assets to run condition monitoring and predictive applications near to where the data is generated.
And we also work at Level 2/Level 3, where APM is used to monitor faults in relation to the process, deployed/running on on-premise compute.
When deployed at Level-4 and Level 5 APM-Studio is used for optimizing the maintenance costs and planning associated to an asset base supporting a process.
UReason is active in real-time condition monitoring, predictive and prescriptive maintenance.
Our field of operation ranges from helping customers gain insights into their data to helping asset owners, OEMs and maintenance service organisations with data-driven maintenance solutions.
Often, we start together with our customers to define the business cases and use-cases to focus on, followed by data collection, model development and deploying the solutions into the existing OT and IT landscape.
We deploy our software APM Studio for manufacturing companies – this is about 40% of our business – and we work a lot with OEMs and skid builders.
For OEMs, the focus is to maintain margin, provide data integration, create new services/business models and stay relevant for the customer. They typically focus on the value of the asset.
For asset owners, the focus is to lower (energy) costs, reduce planned maintenance, reduce reactive maintenance, stretch asset life, balance risks and optimize planning. For asset owners it is all about the value of the asset supporting the process.
So in today’s webinar I want to introduce the topic of Digital Twins.
Digital Twins get a lot of attention in the media and I would like to break down in this webinar what a Digital Twin means and how to apply it in the world of Data Driven Maintenance.
In my opinion there is a lot of ambiguity about what a Digital Twin is .. BUT ExxonMobil released a pretty nice infographic that explains it in the right way.
In this Venn diagram we see .. (read out/explain diagram)
And on top of the ExxonMobil Venn diagram of a Digital Twin there is an ISO standard that describes the concepts, limitations, boundaries and requirements, as well as providing reference models and different views (functional, information and network views).
I mention this because most of the customers we work with are unaware of this standardization effort, which can help you create some clarity!
Even though there is an ISO standard, I like the Gartner definition better .. Because it is more concrete.
Digital Twin technology brings an exact replica in digital format, so in software format, of a process, a product, or a service. Basically, it takes real-world data about a physical object or system as inputs, and produces outputs in the form of predictions or simulations of how that physical object or system will be affected by those inputs.
The most common use cases for digital twins are:
Visualization of products in use, by real users, in real-time
Troubleshooting of remote or inaccessible equipment
Managing complexities and linkage within systems-of-systems
Connecting disparate systems and promoting traceability
There are different types or levels of Digital Twins and these have different Use Case Scenarios:
The Component Twin for example a Bearing, Piston, Axle : Can support field services/technicians to continuously monitor and offer predictive maintenance insights while reducing equipment downtime (planned and unplanned) and enable service-based business models.
An Asset Twin for example a Turbine, Motor, Control Valve: Can support product management, sales to gather knowledge on customer’s preferences and actual usage of their product and provide new service business models to drive revenue.
System / Unit Twin For example a Crude or Reverse Osmosis Unit: Helps product designers, architects, and engineers to improve future product versions and engineering models to optimize product performance and efficiency, accelerating time-to-market.
Process Twin For example a Composite Manufacturing Process: Helps management to get new operational data feeds into production and planning models thus paving way for strategic insights, recommendations, and road maps.
A Digital Twin consists of 4 main elements:
… The Physical equipment - the actual equipment item or items that we are interested in creating a twin for.
… The Twin Model – The software model consisting of a hierarchy of systems, sub-assemblies, and components that describe the twin and its characteristics enriched by asset, operational, historical, and context data.
… Knowledge - Data sources that feed the twin with operational settings, domain expertise, historical data, and industry best practices.
… Analytics – Simulation and/or Machine Learning models; these can be physics-based models, statistical models, and machine learning/AI models that help describe, predict and prescribe the behavior (current and future) of the asset, system, or process.
In the screenshots in this slide I am showing various parts of our software, APM Studio, that is used to set-up digital twins
Now let us look at some examples of Digital Twins ..
Here we have the FOCUS-1 device .. It has onboard an asset DT built in our software APM Studio .. Allowing Cruise Control in Asset Management by
Providing embedded diagnostics
Embedded Soft Sensors for critical measurements – true digital software twins .. And
It is capable of reporting what process conditions are taking place (such as cavitation/flashing) and what maintenance field support is required.
An example at the unit level is one at KUKA, where our software APM Studio provides recommendations on the remaining useful lifetime of tools in the production center in relation to the jobs the center has to fulfill. Digital data twins and AI models help the site have fewer disruptions during unmanned production periods and increase the output of the cells in the Augsburg facility.
A process Twin example is one at Airborne .. Here AI models that predict the gap-width of the laminates that are produced help operations/control to make in-line adjustments reducing scrap and improving overall output.
Now … Not all use cases or business problems can be effectively solved by predictive maintenance using a Digital Twin. They are not a cure for every illness.
The important qualifying criteria that you need to consider during use case qualification for Digital Twin projects are:
An obvious one; Data from field level can be extracted and enhanced to provide further insights higher in the chain.
The problem must support a business case, meaning there should be a target or an outcome to predict/calculate that is of value to operations.
Preferably the problem should have a record of the operational history of the equipment that contains both good and bad outcomes.
Finally, the business should have domain experts who have a clear understanding of the problem.
The steps to realize Digital Twins are quite logical.
Think before you begin .. It is not about trying new technology but selecting a Digital Twin that provides value to your organization/business and or customers.
Value can be a reduction of planned maintenance, creating longer preventive maintenance time horizons, knowing hourly/daily what the risk and associated cost of operation is, et cetera.
The next step is to build and realize the Digital Twin. What we saw in the previous slides is that you need access to data streams (access to the data from the physical asset), historical data, knowledge/simulation models and criteria – for example, I want to know where on the P-F curve my asset currently is. Validation is of course key (hence why you need historical data).
Once successfully validated you can deploy and embed the DT in your processes, but you do need to monitor its performance/deviations and embed the life-cycle management of your Digital Twins in your organisation.
To summarize …
DT are not a myth
You have a standard to guide you .. In terminology/structure
There are different levels of DTs .. And you do not need to start with a full DT of your process(es)
Before you start reflect on the criteria I shared with you
Also have a look at the latest article on our website!
Let me also point out that we have a DT model appified .. In the Valve app, which allows you to get easy insights into control valve performance and make the right decisions in asset management/planning ..
Let me know if you need additional information on this!
Ok the session is open to questions .. Let me check with Carlos what questions already came in …
We have critical pumps in redundant set-up, is there a value in having digital twins for this?
Can the data I have in my data historian be used as a basis to develop soft sensors?
Can a component Digital Twin give me indications of remaining useful lifetimes?
Ok that was the last question, thank you very much from my side for listening and asking interesting questions. Here are my contact details and please note that after the webinar you will receive the slides and a link to the replay.
Again we appreciate your feedback .. please leave us your feedback via the evaluation form – see the link in the chat window.
I wish you a nice day and maybe we'll see you in one of our next webinars.