The document provides an introduction to optimization problems. It defines optimization as involving an objective function to minimize or maximize, subject to constraints on variables. It categorizes problems as continuous or discrete and with or without objectives/constraints. Examples covered include shortest path problems, maximum flow problems, transportation problems, and task assignment problems. Algorithms for some problems are also mentioned.
The document discusses linear programming, which is a mathematical modeling technique used to allocate limited resources optimally. It provides examples of linear programming problems and their formulation. Key aspects covered include defining decision variables and constraints, developing the objective function, and interpreting feasible and optimal solutions. Graphical and algebraic solution methods like the simplex method are also introduced.
This document provides an introduction to algorithms and their design and analysis. It discusses what algorithms are, their key characteristics, and the steps to develop an algorithm to solve a problem. These steps include defining the problem, developing a model, specifying and designing the algorithm, checking correctness, analyzing efficiency, implementing, testing, and documenting. Common algorithm design techniques like top-down design and recursion are explained. Factors that impact algorithm efficiency like use of loops, initial conditions, invariants, and termination conditions are covered. Finally, common control structures for algorithms like if/else, loops, and branching are defined.
The document discusses uninformed search techniques. It provides examples of representing problems as states and operators that transform states. This includes problems like the water jug problem, 8-puzzle, and 8-queens. It then describes common uninformed search algorithms like breadth-first search, depth-first search, iterative deepening, and uniform cost search. It analyzes the properties of these algorithms like completeness, time complexity, space complexity, and optimality.
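The breadth-first search mentioned above can be sketched in a few lines. This is a minimal illustration, not code from the document; the toy graph and the `neighbors` callback are hypothetical:

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search: explore states level by level.
    Complete, and optimal when all step costs are equal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy graph (illustrative only)
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs('A', 'D', lambda s: graph[s]))  # → ['A', 'B', 'D']
```

The FIFO frontier is what makes the search breadth-first; swapping the `deque` for a stack would give depth-first search, and a priority queue keyed on path cost would give uniform cost search.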
Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete:
An optimization problem with discrete variables is known as a discrete optimization problem, in which an object such as an integer, permutation, or graph must be found from a countable set.
A problem with continuous variables is known as a continuous optimization problem, in which an optimal value of a continuous function must be found. Continuous problems can include constrained and multimodal problems.
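The distinction can be made concrete with a small sketch (the objective and search interval are my own illustrative choices, not from the document): the same function minimized over the reals versus over a countable set of integers.

```python
# Continuous: minimize f(x) = (x - 2.3)^2 over an interval of the reals,
# here via simple ternary search on a unimodal function (a sketch, not a
# library routine).
def ternary_min(f, lo, hi, iters=100):
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

f = lambda x: (x - 2.3) ** 2
x_cont = ternary_min(f, -10, 10)     # any real value is allowed: ~2.3

# Discrete: the same objective, but x must come from a countable set.
x_disc = min(range(-10, 11), key=f)  # best integer is 2

print(round(x_cont, 3), x_disc)
```

Note that the discrete optimum (2) is not simply the continuous optimum rounded in general; for many discrete problems the countable set has combinatorial structure (permutations, graphs) and no such shortcut exists.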
This document discusses optimization problems and their solutions. It begins by defining optimization problems as seeking to maximize or minimize a quantity given certain limits or constraints. Both deterministic and stochastic models are discussed. Examples of discrete optimization problems include the traveling salesman and shortest path problems. Solution methods mentioned include integer programming, network algorithms, dynamic programming, and approximation algorithms. The document then focuses on convex optimization problems, which can be solved efficiently. It discusses using tools like CVX for solving convex programs and the duality between primal and dual problems. Finally, it presents the collaborative resource allocation algorithm for solving non-convex optimization problems in a suboptimal way.
This document discusses the 0/1 knapsack problem and how it can be solved using backtracking. It begins with an introduction to backtracking and the difference between backtracking and branch and bound. It then discusses the knapsack problem, giving the definitions of the profit vector, weight vector, and knapsack capacity. It explains how the problem is to find the combination of items that achieves the maximum total value without exceeding the knapsack capacity. The document constructs state space trees to demonstrate solving the knapsack problem using backtracking and fixed tuples. It concludes with examples problems and references.
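The fixed-tuple backtracking described above can be sketched as follows. The profit vector, weight vector, and capacity below are a hypothetical instance, not one taken from the document:

```python
def knapsack_backtrack(profits, weights, capacity):
    """0/1 knapsack by backtracking over the fixed-tuple state space tree:
    at depth i we branch on x_i = 1 (take item i) or x_i = 0 (skip it),
    pruning any branch whose weight already exceeds the capacity."""
    n = len(profits)
    best = {'profit': 0, 'tuple': (0,) * n}

    def walk(i, weight, profit, chosen):
        if weight > capacity:          # infeasible partial tuple: backtrack
            return
        if i == n:                     # leaf of the state space tree
            if profit > best['profit']:
                best['profit'], best['tuple'] = profit, tuple(chosen)
            return
        walk(i + 1, weight + weights[i], profit + profits[i], chosen + [1])
        walk(i + 1, weight, profit, chosen + [0])

    walk(0, 0, 0, [])
    return best['profit'], best['tuple']

# Hypothetical instance: profit vector, weight vector, capacity
print(knapsack_backtrack([40, 35, 18], [2, 3, 1], 4))  # → (58, (1, 0, 1))
```

Branch and bound would strengthen the weight check with an upper bound on the achievable profit, cutting branches that are feasible but provably not better than the incumbent.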
The document discusses greedy algorithms, their characteristics, and an example problem. Greedy algorithms make locally optimal choices at each step in the hope of finding a global optimum. They are simpler and faster than dynamic programming but may not always find the true optimal solution. The coin changing problem is used to illustrate a greedy approach of always selecting the largest valid coin denomination at each step.
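The coin changing heuristic described above fits in a few lines. This is a sketch of the general idea, with illustrative denominations:

```python
def greedy_change(amount, denominations):
    """Greedy coin changing: repeatedly take the largest coin that fits.
    Optimal for canonical systems like [25, 10, 5, 1], but can fail for
    others (e.g. amount 6 with coins [4, 3, 1] yields 4+1+1, not 3+3)."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins if amount == 0 else None

print(greedy_change(63, [25, 10, 5, 1]))  # → [25, 25, 10, 1, 1, 1]
```

The `[4, 3, 1]` counterexample in the docstring is exactly the caveat the summary raises: the locally optimal choice (the largest coin) need not lead to the globally optimal solution.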
The document discusses greedy algorithms, which attempt to find optimal solutions to optimization problems by making locally optimal choices at each step that are also globally optimal. It provides examples of problems that greedy algorithms can solve optimally, such as minimum spanning trees and change making, as well as problems they can provide approximations for, like the knapsack problem. Specific greedy algorithms covered include Kruskal's and Prim's for minimum spanning trees.
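Kruskal's algorithm, named above as a greedy method that is provably optimal for minimum spanning trees, can be sketched as follows (the 4-vertex graph is a hypothetical example):

```python
def kruskal(n, edges):
    """Kruskal's greedy MST: scan edges cheapest-first, keeping an edge
    only if it joins two different components (tracked with union-find)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree, total = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components
            parent[ru] = rv
            tree.append((u, v, w))
            total += w
    return total, tree

# Hypothetical 4-vertex graph, edges as (weight, u, v)
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)]
print(kruskal(4, edges))  # total weight 6
```

The greedy choice (cheapest safe edge) is globally optimal here by the cut property, which is why MSTs are the textbook success story for greedy methods.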
1. Optimization methods are used widely in business, industry, government and engineering to solve problems involving optimal allocation of limited resources. Many optimization techniques originated during World War II to improve war efforts.
2. A linear programming problem aims to maximize or minimize a linear objective function subject to linear constraints. It has various applications including production scheduling, transportation routing, and cutting stock problems.
3. The document provides an example of using a linear programming model to maximize profits for a pottery company by determining the optimal product mix given constraints on available labor hours and clay materials. Decision variables, objective function, and constraints are defined to formulate the mathematical model.
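A product-mix model of this shape can be solved graphically by checking the corner points of the feasible region. The sketch below does exactly that in code; the profit coefficients and resource limits are illustrative stand-ins, not the document's actual data:

```python
from itertools import combinations

# Hypothetical pottery-style LP: maximize Z = 40*x1 + 50*x2 subject to
#   x1 + 2*x2 <= 40    (labor hours)
#   4*x1 + 3*x2 <= 120 (clay, lb)
#   x1, x2 >= 0
# Each boundary is a line a*x1 + b*x2 = c; the optimum of a feasible,
# bounded LP lies at a corner, i.e. an intersection of two boundaries.
lines = [(1, 2, 40), (4, 3, 120), (1, 0, 0), (0, 1, 0)]

def intersect(l1, l2):
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None                 # parallel boundaries
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def feasible(p):
    x1, x2 = p
    return (x1 >= -1e-9 and x2 >= -1e-9
            and x1 + 2 * x2 <= 40 + 1e-9
            and 4 * x1 + 3 * x2 <= 120 + 1e-9)

corners = [p for l1, l2 in combinations(lines, 2)
           if (p := intersect(l1, l2)) and feasible(p)]
best = max(corners, key=lambda p: 40 * p[0] + 50 * p[1])
print(best, 40 * best[0] + 50 * best[1])  # → (24.0, 8.0) 1360.0
```

Enumerating corners only scales to tiny models; the simplex method the summary mentions walks between corners instead of listing them all.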
The document discusses various parallel algorithms for combinatorial optimization problems. It covers topics like branch and bound, backtracking, divide and conquer, and greedy methods. Branch and bound is described as a general algorithm for finding optimal solutions that systematically enumerates candidates and discards subsets that cannot lead to optimal solutions. Backtracking is presented as a systematic way to search a problem space by incrementally building candidates and abandoning partial candidates when they cannot be completed. Divide and conquer is characterized as an approach that breaks problems into subproblems, solves the subproblems, and combines the solutions. Greedy methods are defined as making locally optimal choices at each stage to find a global optimum. Examples like the knapsack problem are provided.
The document provides an agenda for an introduction-to-computers lab covering repetition structures (loops), an introduction to Visual Studio, and C++ programs. It discusses while, do-while, and for loops through flowcharts and C++ code examples. It also covers solution and project containers and program components in Visual Studio, and provides example C++ programs for various algorithms, including determining whether a number is positive/negative, even/odd, or in a range, and baby weight classification. Exercises include finding the maximum of a set of numbers, GCD calculation, decimal-to-binary conversion, and factorial calculation.
This document discusses NP-hard and NP-complete problems. It begins by defining the classes P, NP, NP-hard, and NP-complete. It then provides examples of NP-hard problems like the traveling salesperson problem, satisfiability problem, and chromatic number problem. It explains that to show a problem is NP-hard, one shows it is at least as hard as another known NP-hard problem. The document concludes by discussing how restricting NP-hard problems can result in problems that are solvable in polynomial time.
The document discusses the greedy method algorithmic approach. It provides an overview of greedy algorithms including that they make locally optimal choices at each step to find a global optimal solution. The document also provides examples of problems that can be solved using greedy methods like job sequencing, the knapsack problem, finding minimum spanning trees, and single source shortest paths. It summarizes control flow and applications of greedy algorithms.
The document discusses various parallel algorithms for combinatorial optimization problems. It covers topics like branch and bound, backtracking, divide and conquer, and greedy methods. Branch and bound is described as a general algorithm that uses pruning to discard subsets of solutions that are provably not optimal. Backtracking systematically searches the solution space but abandons partial candidates ("backtracks") when it determines they cannot be completed. Divide and conquer works by recursively breaking problems into independent subproblems until simple enough to solve directly. Greedy algorithms make locally optimal choices at each step to hopefully find a global optimum.
The document outlines the policies, course objectives, and schedule for CHE 536 Engineering Optimization taught by Prof. Shi-Shang Jang at National Tsing Hua University. The class will meet every Thursday from 2-5pm in room 221 of the Chemical Engineering building. The course aims to teach problem formulation, numerical optimization algorithms, and their applications. Homework is due biweekly and grades are based on homework, a midterm exam, and a term project. Topics include single variable optimization, unconstrained optimization, linear programming, and nonlinear programming.
The document discusses constraint-based problem solving, including modeling problems as constraints on acceptable solutions, defining variables and domains, and defining constraints; solving models by defining search spaces and algorithms like backtracking search and stochastic search; and verifying and analyzing solutions. It also provides examples of constraint satisfaction problems and how they can be modeled and solved using different constraint languages and representations.
This document provides an introduction to operations research. It defines operations research as seeking to improve problem solutions through analysis and mathematical models. It gives examples of common optimization problems involving transportation networks, resource allocation, and facility layout. The document classifies optimization problems as either unconstrained or constrained. It explains constrained problems involve an objective function and constraints. Finally, it outlines common solution methods for constrained optimization problems like linear programming.
The document discusses backtracking and branch and bound algorithms. Backtracking incrementally builds candidates and abandons them (backtracks) when they cannot lead to a valid solution. Branch and bound systematically enumerates solutions and discards branches that cannot produce a better solution than the best found so far based on upper bounds. Examples provided are the N-Queens problem solved with backtracking and the knapsack problem solved with branch and bound. Pseudocode is given for both algorithms.
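The N-Queens example named above is the canonical backtracking demonstration. A minimal sketch (my own, not the document's pseudocode):

```python
def n_queens(n):
    """Backtracking: place one queen per row, abandoning (backtracking
    from) any partial placement that attacks an earlier queen."""
    solutions = []

    def safe(cols, col):
        row = len(cols)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(cols):
        if len(cols) == n:
            solutions.append(tuple(cols))
            return
        for col in range(n):
            if safe(cols, col):
                place(cols + [col])

    place([])
    return solutions

print(len(n_queens(8)))  # → 92, the classic count for 8 queens
```

Backtracking prunes on feasibility alone (the `safe` check); branch and bound additionally prunes feasible branches whose bound shows they cannot beat the best solution found so far.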
Dear students get fully solved assignments
Send your semester & Specialization name to our mail id :
“ help.mbaassignments@gmail.com ”
or
Call us at : 08263069601
(Prefer mailing. Call in emergency )
The document discusses optimization techniques for finding the minimum or maximum of a function. It begins by distinguishing optimization from root location, noting that optimization involves finding extrema rather than zeros. Several one-dimensional optimization methods are then described, including golden section search, parabolic interpolation, and Newton's method. The document notes that multidimensional optimization poses additional challenges and describes some direct methods that do not require derivatives, such as random search, for tackling multidimensional problems.
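Golden section search, the first one-dimensional method listed, can be sketched as follows. The quadratic test function is an illustrative choice of mine:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search: shrink [a, b] by the golden ratio each
    iteration, reusing one interior function evaluation per step.
    Requires f to be unimodal on [a, b]; no derivatives needed."""
    inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:
            b, d, fd = d, c, fc               # minimum lies in [a, d]
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:
            a, c, fc = c, d, fd               # minimum lies in [c, b]
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2

# Illustrative test: minimum of x^2 - 4x + 7 is at x = 2
print(golden_section_min(lambda x: x * x - 4 * x + 7, 0, 5))
```

This is the optimization analogue of bisection for root finding: like bisection it only brackets and shrinks an interval, which is exactly the distinction between extremum finding and zero finding that the summary opens with.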
The document discusses greedy algorithms and provides examples of how they can be applied to solve optimization problems like the knapsack problem. It defines greedy techniques as making locally optimal choices at each step to arrive at a global solution. Examples where greedy algorithms are used include finding the shortest path, minimum spanning tree (using Prim's and Kruskal's algorithms), job sequencing with deadlines, and the fractional knapsack problem. Pseudocode and examples are provided to demonstrate how greedy algorithms work for the knapsack problem and job sequencing problem.
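The fractional knapsack case mentioned above is one of the few knapsack variants where greedy is provably optimal. A sketch with a hypothetical item list:

```python
def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack: take items in decreasing profit/weight
    ratio, splitting the last item to fill the remaining capacity exactly.
    Greedy is provably optimal here, unlike for the 0/1 variant."""
    total = 0.0
    for profit, weight in sorted(items, key=lambda it: it[0] / it[1],
                                 reverse=True):
        take = min(weight, capacity)       # whole item, or the fraction left
        total += profit * take / weight
        capacity -= take
        if capacity == 0:
            break
    return total

# Hypothetical (profit, weight) items, capacity 50
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # → 240.0
```

Allowing fractions is what rescues the greedy choice: the last, partially taken item absorbs exactly the capacity that an integral solution would have to waste or re-plan around.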
The document discusses Particle Swarm Optimization (PSO) algorithms and their application in engineering design optimization. It provides an overview of optimization problems and algorithms. PSO is introduced as an evolutionary computational technique inspired by animal social behavior that can be used to find global optimization solutions. The document outlines the basic steps of the PSO algorithm and how it works by updating particle velocities and positions to track the best solutions. Examples of applications to model fitting and inductor design optimization are provided.
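The velocity-and-position update at the heart of PSO can be sketched as below. This is a minimal illustration with standard parameter choices (inertia `w`, cognitive `c1`, social `c2`); the sphere objective is a toy stand-in, not one of the document's engineering applications:

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO sketch: each particle remembers its personal best and
    the swarm shares a global best; velocities are pulled toward both,
    then positions move by the velocity (clamped to the bounds)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)
sphere = lambda x: sum(v * v for v in x)   # toy objective, minimum at 0
best, val = pso(sphere, dim=2, bounds=(-5, 5))
print(val)  # close to 0
```

Like other population-based metaheuristics, PSO needs no gradients and can escape some local minima, but it offers no optimality guarantee; it trades the completeness of exact methods for applicability to black-box objectives.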
This document discusses dynamic programming algorithms and NP-completeness. It begins by outlining 12 steps for proving that a new problem P_new is NP-complete. It then provides examples of reducing the clique problem to the independent set problem to show they are equivalent. The document explains the concepts of instances, solutions, validity, and reductions. It shows how to reduce any NP problem to the circuit satisfiability problem by building a circuit V_I(S) that is equivalent to checking whether a solution S is valid for a given instance I.
Lecture 1 from https://irdta.eu/deeplearn/2022su/
Covers concepts from Part 1 of my new book, https://meyn.ece.ufl.edu/2021/08/01/control-systems-and-reinforcement-learning/
The cost of acquiring information by natural selection, by Carl Bergstrom
This is a short talk that I gave at the Banff International Research Station workshop on Modeling and Theory in Population Biology. The idea is to try to understand how the burden of natural selection relates to the amount of information that selection puts into the genome.
It's based on the first part of this research paper:
The cost of information acquisition by natural selection
Ryan Seamus McGee, Olivia Kosterlitz, Artem Kaznatcheev, Benjamin Kerr, Carl T. Bergstrom
bioRxiv 2022.07.02.498577; doi: https://doi.org/10.1101/2022.07.02.498577
The binding of cosmological structures by massless topological defects, by Sérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is mitigated, at least in part.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste..., by Sérgio Sacani
Context. With a mass exceeding several 10⁴ M⊙ and a rich and dense population of massive stars, supermassive young star clusters represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars. The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically, the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec. Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a photon flux threshold of approximately 2 × 10⁻⁸ photons cm⁻² s⁻¹. The X-ray sources exhibit a highly concentrated spatial distribution, with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbita...Advanced-Concepts-Team
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of Moon and artificial
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
hematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
ESR spectroscopy in liquid food and beverages.pptxPRIYANKA PATEL
With increasing population, people need to rely on packaged food stuffs. Packaging of food materials requires the preservation of food. There are various methods for the treatment of food to preserve them and irradiation treatment of food is one of them. It is the most common and the most harmless method for the food preservation as it does not alter the necessary micronutrients of food materials. Although irradiated food doesn’t cause any harm to the human health but still the quality assessment of food is required to provide consumers with necessary information about the food. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during the processing of the food. ESR spin trapping technique is useful for the detection of highly unstable radicals in the food. The antioxidant capability of liquid food and beverages in mainly performed by spin trapping technique.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdfSelcen Ozturkcan
Ozturkcan, S., Berndt, A., & Angelakis, A. (2024). Mending clothing to support sustainable fashion. Presented at the 31st Annual Conference by the Consortium for International Marketing Research (CIMaR), 10-13 Jun 2024, University of Gävle, Sweden.
2. Content
What is Optimization?
Categorization of Optimization Problems
Some Optimization Problems
3. What is Optimization?
Objective Function
– A function to be minimized or maximized
Unknowns or Variables
– Affect the value of the objective function
Constraints
– Restrict unknowns to take on certain values but
exclude others
5. Example:
0-1 Knapsack Problem
Which boxes should be chosen
to maximize the amount of
money while still keeping the
overall weight under 15 kg?
6. Example:
0-1 Knapsack Problem
Maximize 4x1 + 2x2 + 10x3 + 2x4 + x5 (Objective Function)
Subject to 12x1 + x2 + 4x3 + 2x4 + x5 ≤ 15 (Constraints)
xi ∈ {0,1}, i = 1, ..., 5 (Unknowns or Variables)
7. Example:
0-1 Knapsack Problem
Maximize 4x1 + 2x2 + 10x3 + 2x4 + x5
Subject to 12x1 + x2 + 4x3 + 2x4 + x5 ≤ 15,
xi ∈ {0,1}, i = 1, ..., 5
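With only five 0/1 variables, the model above can be checked by enumerating all 2^5 assignments. The sketch below does exactly that; the coefficients are those read from the slide's formulation (values 4, 2, 10, 2, 1; weights 12, 1, 4, 2, 1; capacity 15).

```python
from itertools import product

# Brute-force sketch of the 0-1 knapsack model on this slide:
# maximize 4x1 + 2x2 + 10x3 + 2x4 + x5
# subject to 12x1 + x2 + 4x3 + 2x4 + x5 <= 15, xi in {0,1}.
values = [4, 2, 10, 2, 1]
weights = [12, 1, 4, 2, 1]
capacity = 15

def solve_knapsack(values, weights, capacity):
    best_value, best_choice = 0, None
    for choice in product([0, 1], repeat=len(values)):  # all 2^5 assignments
        weight = sum(w * x for w, x in zip(weights, choice))
        value = sum(v * x for v, x in zip(values, choice))
        if weight <= capacity and value > best_value:
            best_value, best_choice = value, choice
    return best_value, best_choice

print(solve_knapsack(values, weights, capacity))
```

Here the best feasible choice drops the heavy first box and takes the other four (total weight 8 kg, total value 15). Enumeration is fine for 5 items but grows as 2^n; dynamic programming is the standard approach for larger instances.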
9. Minimum and maximum value of a function
Consider min_{x ∈ R} (x^2 + 1). This denotes the minimum value of the objective function x^2 + 1, when choosing x from the set of real numbers R. The minimum value in this case is 1, occurring at x = 0.
10. Maximum value of a function
Similarly, the notation max_{x ∈ R} 2x asks for the maximum value of the objective function 2x, where x may be any real number. In this case there is no such maximum, since the objective function is unbounded, so the answer is "infinity" or "undefined".
11. By convention, the standard form of an optimization problem is stated in terms of minimization. Generally, unless both the objective function and the feasible region are convex in a minimization problem, there may be several local minima. A local minimum x* is defined as a point for which there exists some δ > 0 so that for all x with ||x − x*|| ≤ δ, the expression f(x) ≥ f(x*) holds; that is to say, on some region around x* all of the function values are greater than or equal to the value at that point. Local maxima are defined similarly.
13. Optimization Problems w/o
Objective Function
In some cases, the goal is to find a set of
variables that satisfies some constraints only, e.g.,
– circuit layouts
– n-queen
– Sudoku
This type of problem is usually called a
feasibility problem or a constraint
satisfaction problem.
• Objective Function
• Unknowns or Variables
• Constraints
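The n-queens example above can be sketched as a constraint-only search: there is nothing to minimize, we just backtrack until every constraint is satisfied.

```python
# A minimal backtracking sketch for the n-queens feasibility problem.
# There is no objective function: we search for ANY placement of one
# queen per row/column with no two queens sharing a diagonal.
def solve_n_queens(n, cols=()):
    row = len(cols)  # next row to fill; cols[r] = column of queen in row r
    if row == n:
        return cols  # all queens placed: a feasible solution
    for col in range(n):
        # constraint check: no shared column or diagonal with earlier queens
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):
            result = solve_n_queens(n, cols + (col,))
            if result is not None:
                return result
    return None  # no feasible completion of this partial placement

print(solve_n_queens(8))
```

The same skeleton (assign a variable, check constraints, backtrack on failure) carries over to Sudoku and circuit-layout feasibility problems.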
14. Optimization Problems w/
Multiple Objective Functions
Sometimes, we need to optimize a number of different
objectives at once, e.g.,
– In the panel design problem, it would be nice to minimize
weight and maximize strength simultaneously.
Usually, the different objectives are not compatible
– The variables that optimize one objective may be far from
optimal for the others.
In practice, problems with multiple objectives are
reformulated as single-objective problems by either
forming a weighted combination of the different objectives
or else replacing some of the objectives by constraints.
15. Variables
Variables are essential.
– Without variables, we cannot define the objective
function and the problem constraints.
Continuous Optimization
– all the variables are allowed to take values from
subintervals of the real line;
Discrete Optimization
– requires some or all of the variables to have integer values.
16. Constraints
Constraints are not essential.
Unconstrained optimization
– a large and important class of problems for which many
algorithms and software packages are available.
However, almost all problems really do have
constraints, e.g.,
– Any variable denoting the “number of objects” in a system
can only be useful if it is less than the number of
elementary particles in the known universe!
– In practice, though, answers that make good sense in
terms of the underlying physical or economic problem can
often be obtained without putting constraints on the
variables.
20. Some Well-Known Algorithms
for Shortest Path Problems
Dijkstra's algorithm — solves the single-source problem if all
edge weights are greater than or equal to zero.
Bellman-Ford algorithm — solves the single-source problem even if
edge weights may be negative.
A* search algorithm — solves single-source shortest
paths using heuristics to try to speed up the search.
Floyd-Warshall algorithm — solves the all-pairs shortest path problem.
Johnson's algorithm — solves the all-pairs shortest path problem, and may
be faster than Floyd-Warshall on sparse graphs.
Perturbation theory — finds the locally shortest path.
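The first entry in the list above can be sketched compactly with a priority queue. The small example graph here is illustrative, not taken from the slides.

```python
import heapq

# A minimal sketch of Dijkstra's algorithm: single-source shortest
# paths, assuming all edge weights are nonnegative.
def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]  # (tentative distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w  # relax edge u -> v
                heapq.heappush(heap, (d + w, v))
    return dist

# Illustrative graph: adjacency lists of (neighbor, weight) pairs.
graph = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)]}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

The nonnegativity assumption is what lets the algorithm finalize a node the first time it is popped; with negative weights, Bellman-Ford is needed instead.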
21. Maximum Flow Problem
Given: Directed graph G=(V, E),
Supply (source) node O, demand (sink) node T
Capacity function u: E → R+.
Goal:
– Given the arc capacities,
send as much flow as possible
from supply node O to demand node T
through the network.
Example:
[Figure: example network with nodes O, A, B, C, D, T and arc capacities 4, 4, 5, 6, 4, 4, 5, 5]
22. Towards the Augmenting Path Algorithm
Idea: Find a path from the source to the sink,
and use it to send as much flow as possible.
In our example,
5 units of flow can be sent through the path O→B→D→T.
Then use the path O→C→T to send 4 units of flow.
The total flow is 5 + 4 = 9 at this point.
Can we send more?
[Figure: the network with 5 units of flow on O→B→D→T and 4 units on O→C→T]
23. Towards the Augmenting Path Algorithm
If we redirect 1 unit of flow
from path O→B→D→T to path O→B→C→T,
then the freed capacity of arc D→T can be used
to send 1 more unit of flow through path O→A→D→T,
making the total flow equal to 9 + 1 = 10.
To realize this idea of redirecting the flow in a systematic
way, we need the concept of residual capacities.
[Figure: the redirected flow of 10 units, with residual capacities shown on each arc]
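The redirection trick above is what residual capacities automate: sending flow on an arc creates a reverse residual arc that later paths may use to "undo" it. The sketch below implements the augmenting path idea with BFS (the Edmonds-Karp variant). The arc-capacity assignment is inferred from the slides' walkthrough (it reproduces the total flow of 10) and is an assumption, not the figure itself.

```python
from collections import deque

# A sketch of the augmenting path algorithm with BFS (Edmonds-Karp).
# residual[u][v] holds the remaining capacity of arc u -> v; reverse
# arcs start at 0 and grow as flow is pushed, enabling redirection.
def max_flow(capacity, source, sink):
    residual = {u: dict(arcs) for u, arcs in capacity.items()}
    for u, arcs in capacity.items():
        for v in arcs:
            residual.setdefault(v, {}).setdefault(u, 0)  # reverse arcs
    flow = 0
    while True:
        # BFS for an augmenting path with positive residual capacity
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: current flow is maximum
        # walk back from the sink, find the bottleneck, update residuals
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck  # allow later redirection
        flow += bottleneck

# Capacities inferred from the walkthrough (an assumption).
capacity = {
    "O": {"A": 4, "B": 5, "C": 4},
    "A": {"D": 4},
    "B": {"D": 6, "C": 4},
    "C": {"T": 5},
    "D": {"T": 5},
}
print(max_flow(capacity, "O", "T"))  # 10, as in the slides
```

The returned value 10 matches the slides' conclusion; the cut {D→T, C→T} with capacity 5 + 5 = 10 certifies that no larger flow exists.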
26. Hitchcock Transportation Problems
Find a minimal-cost transportation plan.
A problem instance consists of:
– m supply nodes with supplies s1, ..., sm
– n demand nodes with demands d1, ..., dn
– an m×n unit-cost matrix C = (cij), where cij is the cost of shipping
one unit from supply node i to demand node j
An answer is an m×n shipment matrix X = (xij), where xij is the number
of units shipped from supply node i to demand node j.
Minimize sum_{i=1}^{m} sum_{j=1}^{n} cij·xij
28. Hitchcock Transportation Problems
Minimize sum_{i=1}^{m} sum_{j=1}^{n} cij·xij
Subject to
sum_{j=1}^{n} xij = si, for i = 1, ..., m (supply)
sum_{i=1}^{m} xij = dj, for j = 1, ..., n (demand)
xij ≥ 0, for i = 1, ..., m and j = 1, ..., n
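A standard way to get a feasible (not yet optimal) shipment matrix for this model is the northwest-corner rule: fill x from the top-left cell, exhausting one supply or demand at a time. The instance below is hypothetical, chosen only so that total supply equals total demand.

```python
# Sketch of the northwest-corner rule for the transportation model:
# builds a shipment matrix x satisfying the supply and demand equality
# constraints, ignoring costs (a starting point for improvement methods).
def northwest_corner(supply, demand):
    supply, demand = list(supply), list(demand)  # copies we can consume
    m, n = len(supply), len(demand)
    x = [[0] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        shipped = min(supply[i], demand[j])  # ship as much as possible
        x[i][j] = shipped
        supply[i] -= shipped
        demand[j] -= shipped
        if supply[i] == 0:
            i += 1  # supply node i exhausted: move down a row
        else:
            j += 1  # demand node j satisfied: move right a column
    return x

# Hypothetical balanced instance: total supply = total demand = 45.
print(northwest_corner([20, 25], [10, 15, 20]))
```

Each row of the result sums to the corresponding supply and each column to the corresponding demand; a cost-aware method (e.g., the transportation simplex) would then improve this starting solution.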
29. Task Assignment Problems
Example: Machineco has four jobs to be completed. Each
machine must be assigned to complete one job. The time
required to set up each machine for completing each job is
shown in the table below. Machineco wants to minimize the
total setup time needed to complete the four jobs.
Problem Instance — Setup Time (Hours)
            Job 1  Job 2  Job 3  Job 4
Machine 1     14      5      8      7
Machine 2      2     12      6      5
Machine 3      7      8      3      9
Machine 4      2      4      6     10
33. Task Assignment Problems
Minimize sum_{i=1}^{m} sum_{j=1}^{m} cij·xij
Subject to
sum_{j=1}^{m} xij = 1, for i = 1, ..., m (each machine does one job)
sum_{i=1}^{m} xij = 1, for j = 1, ..., m (each job gets one machine)
xij ∈ {0,1}, for i = 1, ..., m and j = 1, ..., m
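For m = 4 the model can be solved by trying all m! = 24 one-to-one assignments, using the setup-time table from the Machineco example:

```python
from itertools import permutations

# Brute-force sketch for the Machineco assignment instance:
# enumerate every one-to-one machine-to-job assignment, keep the cheapest.
setup_time = [
    [14, 5, 8, 7],   # machine 1
    [2, 12, 6, 5],   # machine 2
    [7, 8, 3, 9],    # machine 3
    [2, 4, 6, 10],   # machine 4
]

def best_assignment(cost):
    m = len(cost)
    return min(
        (sum(cost[i][jobs[i]] for i in range(m)), jobs)
        for jobs in permutations(range(m))  # jobs[i] = job given to machine i
    )

total, jobs = best_assignment(setup_time)
print(total, jobs)  # 15 hours: M1->Job2, M2->Job4, M3->Job3, M4->Job1
```

Enumeration is only viable for tiny m; the Hungarian algorithm solves the general assignment problem in polynomial time.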
45. Bin-Packing Problems
Determine how to pack the given objects into
the smallest number of fixed-capacity bins.
There are many variants, such as 3D, 2D,
linear, pack by volume, pack by weight,
minimize volume, maximize value, fixed-shape
objects, etc.
46. Bin-Packing Problems
Example: suppose we need a
number of pipes of different,
specific lengths to plumb a
house and we can buy pipe
stock in 5 meter lengths.
How can we cut the 5 meter
pipes to waste as little as
possible, i.e., to minimize the
cost of pipe?
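The pipe-cutting question maps directly onto one-dimensional bin packing; a common heuristic is first-fit decreasing. The required pipe lengths below are hypothetical, since the slide only fixes the 5 m stock length, and FFD is a heuristic, not guaranteed optimal in general.

```python
# A first-fit decreasing (FFD) sketch for the pipe-cutting example:
# cut the required lengths from as few 5 m stock pipes as possible.
def first_fit_decreasing(lengths, stock=5.0):
    bins = []  # each entry: remaining length in one stock pipe
    for piece in sorted(lengths, reverse=True):  # longest pieces first
        for i, remaining in enumerate(bins):
            if piece <= remaining:
                bins[i] -= piece  # cut from the first pipe it fits in
                break
        else:
            bins.append(stock - piece)  # start a new stock pipe
    return len(bins)

# Hypothetical required lengths: 13 m of pipe in total.
pieces = [3.0, 2.5, 2.0, 2.0, 1.5, 1.0, 1.0]
print(first_fit_decreasing(pieces))  # 3 stock pipes
```

Here 3 pipes is actually optimal, since 13 m of pipe cannot fit in two 5 m lengths; in general FFD uses at most about 11/9 of the optimal number of bins.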