Selection sort works by iteratively finding the minimum element in the unsorted portion of an array and swapping it into the sorted position. It has a time complexity of O(n^2) and is inefficient for large data sets. Bubble sort compares adjacent elements and swaps them if out of order, causing the largest elements to "bubble" to the end with each pass. Insertion sort inserts elements into the sorted portion of the array by shifting greater elements to make room. Both bubble and insertion sort have quadratic time complexity but insertion sort is generally more efficient due to less element swapping.
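The selection-sort procedure described above can be sketched as follows. This is a minimal Python illustration, not code from the original slides:

```python
def selection_sort(arr):
    """Sort arr in place by repeatedly selecting the minimum of the unsorted suffix."""
    n = len(arr)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted portion arr[i:]
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Swap it into the sorted position at index i
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr
```

Note that the nested loops make the comparison count quadratic regardless of input order, which is why the summaries consistently rate it O(n^2).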
This document defines and describes three sorting algorithms: selection sort, exchange sort, and insertion sort. Selection sort works by selecting the first element and comparing it to all other elements, swapping elements to put the smallest in the top position, repeating this for each element. Exchange sort is similar to bubble sort, comparing adjacent elements and swapping them if out of order. Insertion sort works like sorting a hand of cards, removing one card at a time from the table and inserting it into the correct position in your hand.
This document discusses bubble sort, selection sort, and insertion sort algorithms. It provides descriptions of how each algorithm works, including pseudocode examples. Bubble sort compares adjacent element pairs and swaps them if out of order, taking O(n^2) time. Selection sort finds the minimum element and swaps it into the sorted portion of the array in each pass. Insertion sort maintains a sorted sub-list, inserting new elements in the appropriate place to build the full sorted list.
This document discusses two sorting algorithms: selection sort and insertion sort. Selection sort works by finding the smallest element in the unsorted array and swapping it into the sorted position. This continues until the array is fully sorted. Insertion sort shifts elements in the sorted portion of the array to make room to insert new elements in sorted order. It is more efficient than selection sort for smaller datasets or datasets that are already partially sorted. Pseudocode and examples are provided to illustrate how each algorithm works.
Selection sort is an in-place comparison-based sorting algorithm that divides an array into two parts - a sorted part on the left and an unsorted part on the right. It finds the smallest element in the unsorted part and swaps it with the leftmost element to add it to the sorted part. This process continues moving the unsorted boundary to the right until the array is fully sorted. Selection sort has an average and worst-case time complexity of O(n^2), making it unsuitable for large data sets.
Selection sort and insertion sort are both simple in-place comparison sorting algorithms. Selection sort works by iterating through the list and swapping elements to put the smallest remaining element in the sorted position at each step. Insertion sort maintains a sorted sub-list, scanning unsorted items and inserting them into the correct position within the sub-list. Both algorithms have O(n^2) time complexity, making them inefficient for large data sets.
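The insertion-sort behavior described above (maintaining a sorted sub-list and shifting greater elements to make room) can be sketched in Python; this is an illustrative version, not the original pseudocode:

```python
def insertion_sort(arr):
    """Sort arr in place by growing a sorted prefix one element at a time."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift elements greater than key one slot right to open a gap
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key  # insert key into its sorted position
    return arr
```

Because the inner while loop exits immediately when the prefix is already in order, the sort runs in near-linear time on nearly sorted input, which is the adaptivity several of these summaries mention.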
The document discusses sorting algorithms. It begins by defining sorting and listing common types of sorts like bubble sort, selection sort, insertion sort, merge sort, and radix sort. It then focuses on explaining selection sort, insertion sort, and radix sort in more detail. For each algorithm, it provides an overview of how it works, discusses space and time complexity, and provides examples. Key points made include that selection sort finds the minimum element and swaps it into position each pass, insertion sort maintains a sorted sub-list and inserts new elements, and radix sort sorts based on digits from the least to most significant place value in multiple passes.
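The radix-sort idea summarized above (stable passes from the least to the most significant digit) can be sketched for non-negative integers; this is an assumed LSD variant in Python, not taken from the document:

```python
def radix_sort(arr):
    """LSD radix sort for non-negative integers: one stable bucket pass per decimal digit."""
    if not arr:
        return arr
    exp = 1
    while max(arr) // exp > 0:
        # Distribute into ten buckets keyed on the current digit (stable)
        buckets = [[] for _ in range(10)]
        for x in arr:
            buckets[(x // exp) % 10].append(x)
        # Collect buckets in order; earlier digits' ordering is preserved
        arr = [x for bucket in buckets for x in bucket]
        exp *= 10
    return arr
```

Stability of each pass is what makes the multi-pass scheme correct: ties on the current digit keep the order established by the previous, less significant digits.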
The document discusses sorting algorithms including bubble sort, selection sort, insertion sort, and merge sort. It provides pseudocode and explanations of how each algorithm works. Bubble sort, selection sort, and insertion sort have O(n^2) runtime and are best for small datasets, while merge sort uses a divide-and-conquer approach to sort arrays with O(n log n) runtime, making it more efficient for large datasets. Radix sort is also discussed as an alternative sorting method that is optimized for certain data types.
The document discusses the array data structure and the bubble sort algorithm. Key points:
- An array is a linear data structure that stores elements in contiguous memory locations. It is defined by its size and elements can be accessed via indices.
- Bubble sort works by repeatedly "bubbling up" the largest value to its sorted position. It compares adjacent elements and swaps them if out of order.
- The bubble sort algorithm takes O(n^2) time since, in the worst case, each of up to n-1 passes performs up to n-1 comparisons and swaps. It is one of the simplest sorting algorithms but has poor time complexity.
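The bubbling behavior described in the points above can be sketched as follows (a minimal Python illustration, with the common early-exit optimization when a pass makes no swaps):

```python
def bubble_sort(arr):
    """Sort arr in place; each pass bubbles the largest unsorted value to the end."""
    n = len(arr)
    for p in range(n - 1):              # at most n-1 passes
        swapped = False
        for i in range(n - 1 - p):      # the last p elements are already in place
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
        if not swapped:                 # a clean pass means the array is sorted
            break
    return arr
```

The early exit is what makes bubble sort tolerable on data that is already mostly sorted, as a later summary in this list notes.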
The document discusses various sorting techniques used in computer science. It describes insertion sort, selection sort, and merge sort. Insertion sort maintains a sorted sub-list and inserts new elements into the correct position within the sub-list. Selection sort divides the list into sorted and unsorted parts, selecting the minimum element from unsorted each time. Merge sort divides the list into halves recursively until single elements remain, then merges the halves back together in sorted order.
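The recursive split-then-merge structure of merge sort described above can be sketched in Python (an illustrative out-of-place version, not the document's own pseudocode):

```python
def merge_sort(arr):
    """Recursively split arr into halves, then merge the sorted halves."""
    if len(arr) <= 1:                      # a single element is already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge: repeatedly take the smaller front element of the two halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]   # append whichever half has leftovers
```

Each level of recursion does O(n) merging work across O(log n) levels, which is where the O(n log n) bound quoted elsewhere in these summaries comes from.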
The document discusses bubble sort, a simple sorting algorithm where each pair of adjacent elements is compared and swapped if in the wrong order. This process is repeated, with the highest/lowest value "bubbling" to the top/bottom, until the list is fully sorted. Although simple, bubble sort is slow compared to other algorithms. It can be useful if data is usually sorted but with some out-of-order elements. Pseudocode and an example are provided to illustrate the bubble sort process.
This document discusses different types of sorting methods. It describes internal sorting, which sorts data within main memory, and external sorting, which sorts data that exceeds main memory size. The document outlines common sorting algorithms like bubble sort, selection sort, and insertion sort. It provides details on the sorting process and algorithms for each method.
PPT On Sorting And Searching Concepts In Data Structure | In Programming Lang... — Umesh Kumar
The document discusses various sorting and searching algorithms:
- Bubble sort, selection sort, merge sort, and quicksort are sorting algorithms that arrange data in a particular order, such as numerical or alphabetical.
- Linear search and binary search are searching algorithms where linear search sequentially checks each item while binary search divides the data set in half with each comparison.
- Examples are provided to illustrate how each algorithm works step-by-step on sample data sets.
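The contrast between the two search strategies in the points above can be sketched directly; these are minimal Python versions for illustration:

```python
def linear_search(items, target):
    """Check each item in sequence; works on unsorted data. Returns index or -1."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """Halve the search range each step; requires sorted input. Returns index or -1."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1          # target can only be in the upper half
        else:
            hi = mid - 1          # target can only be in the lower half
    return -1
```

Linear search takes O(n) comparisons in the worst case, while binary search takes O(log n), at the cost of requiring the data to be sorted first.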
This document discusses several sorting algorithms including selection sort, insertion sort, and bubble sort. Selection sort works by iterating through an array, finding the minimum value, and swapping it into the first position. Insertion sort iterates through the array and inserts each element into its sorted position by shifting other elements over. Examples of both algorithms are provided along with pseudocode. The document provides detailed explanations of how each algorithm functions to sort an array of values.
The document discusses various searching and sorting algorithms. It describes linear search, binary search, and interpolation search for searching unsorted and sorted lists. It also explains different sorting algorithms like bubble sort, selection sort, insertion sort, quicksort, shellsort, heap sort, and merge sort. Linear search searches sequentially while binary search uses divide and conquer. Sorting algorithms like bubble sort, selection sort, and insertion sort are in-place and have quadratic time complexity in the worst case. Quicksort, mergesort, and heapsort generally have better performance.
Selection-sort-in-algorithm and complexity.pptx — ArjayBalberan1
Selection sort is a simple sorting algorithm that works by repeatedly finding the minimum element in the unsorted portion of the array and swapping it into the sorted position. It divides the array into a sorted and unsorted portion, initially with the entire array unsorted. It iterates through the unsorted portion, finds the minimum element, and swaps it into the sorted portion. This continues until the entire array is sorted. The algorithm has an average and worst-case time complexity of O(n^2) due to its use of nested loops. A flowchart demonstrates the step-by-step process of selection sort on an example array.
The document discusses various searching and sorting algorithms including linear/sequential search, binary search, selection sort, bubble sort, insertion sort, quick sort, and merge sort. It provides descriptions of each algorithm and examples to illustrate how they work on sample data sets. Key steps and properties of each algorithm are outlined such as complexity, how elements are compared and swapped during sorting, and dividing arrays during searching.
The document summarizes various sorting algorithms:
- Bubble sort works by repeatedly swapping adjacent elements that are in the wrong order until the list is fully sorted. It requires O(n^2) time.
- Insertion sort iterates through the list and inserts each element into its sorted position. It is an adaptive algorithm with O(n) time for nearly sorted inputs.
- Quicksort uses a divide-and-conquer approach, recursively partitioning the list around a pivot element and sorting the sublists. It has average-case performance of O(n log n) time.
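The partition-around-a-pivot step in the quicksort point above can be sketched as follows. This is a simple out-of-place Python illustration using the last element as pivot (one common but non-optimal choice), not the document's own pseudocode:

```python
def quicksort(arr):
    """Recursively partition around a pivot, then sort each side."""
    if len(arr) <= 1:
        return arr
    pivot = arr[-1]
    # Partition the remaining elements relative to the pivot
    less = [x for x in arr[:-1] if x <= pivot]
    greater = [x for x in arr[:-1] if x > pivot]
    return quicksort(less) + [pivot] + quicksort(greater)
```

With a last-element pivot, already-sorted input degenerates to O(n^2); randomized or median-of-three pivot selection is the usual remedy for keeping the average O(n log n) behavior.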
The document compares the performance of heap sort and insertion sort algorithms using different sized data sets. It implements both algorithms and analyzes their time complexities in best, average, and worst cases. The results show that insertion sort performs better on small and average sized data, while heap sort scales better to large data sets and has more consistent performance across cases. Heap sort is generally more suitable than insertion sort when sorting large amounts of data.
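A heap sort along the lines compared above can be sketched compactly with Python's standard-library binary heap; this is an illustrative version (heapify in O(n), then n pops of O(log n)), not the implementation the document benchmarked:

```python
import heapq

def heap_sort(arr):
    """Sort by building a min-heap, then popping the minimum n times."""
    heap = list(arr)        # copy so the caller's list is untouched
    heapq.heapify(heap)     # O(n) bottom-up heap construction
    return [heapq.heappop(heap) for _ in range(len(heap))]
```

Unlike insertion sort, heap sort's O(n log n) bound holds in the best, average, and worst cases, which matches the "more consistent performance" observation above.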
Bubble sort is a simple sorting algorithm that compares adjacent elements and swaps them if they are in the wrong order. Passes over the list repeat until a complete pass makes no swaps, at which point the list is fully sorted. Selection sort works by finding the minimum element in the unsorted section and swapping it with the leftmost unsorted element to build up the sorted section from left to right. Insertion sort maintains a sorted sub-list and inserts new elements into the correct position in the sub-list by shifting other elements over as needed.
This document provides an overview of sorting algorithms. It defines sorting as arranging data in a particular order, such as ascending or descending. Common sorting algorithms discussed include bubble sort, selection sort, insertion sort, merge sort, and quick sort. For each algorithm, the working method, implementation in C, and time and space complexity are explained. The document also covers sorting terminology like stable vs. unstable sorting and adaptive vs. non-adaptive algorithms. Overall, the document serves as a comprehensive introduction to sorting and different sorting techniques.
The document discusses various sorting algorithms and their complexity. It begins by defining sorting as arranging data in increasing or decreasing order. It then discusses the complexity of sorting algorithms in terms of comparisons, swaps, and assignments needed. Sorting algorithms are divided into internal sorts, which use only main memory, and external sorts, which use external storage like disks. Popular internal sorting algorithms discussed in detail include bubble sort, selection sort, insertion sort, and merge sort. Bubble sort has a time complexity of O(n^2) while merge sort and quicksort have better time complexities of O(n log n).
The document discusses four sorting algorithms: selection sort, bubble sort, insertion sort, and merge sort. It provides pseudocode for the selection sort algorithm and describes how it works by making successive passes through an array and swapping elements until the array is fully sorted. Bubble sort and insertion sort are also described as making multiple passes through an array, comparing and swapping adjacent elements until the array is in order. Pseudocode is provided for the bubble sort and insertion sort algorithms.
The document describes several sorting algorithms:
1) Bubble sort repeatedly compares and swaps adjacent elements, moving the largest values to the end over multiple passes. It has a complexity of O(n^2).
2) Insertion sort inserts elements one by one into the sorted portion of the array by shifting elements and comparing. It is O(n^2) in worst case but O(n) if nearly sorted.
3) Selection sort finds the minimum element and swaps it into the first position in each pass to build the sorted array. It has complexity O(n^2).
4) Merge sort divides the array into halves recursively, then merges the sorted halves to produce the fully sorted array.
It is a presentation on some Searching and Sorting Techniques for Computer Science.
It consists of the following techniques:
Sequential Search
Binary Search
Selection Sort
Bubble Sort
Insertion Sort
1. The document discusses various data structures concepts including arrays, dynamic arrays, operations on arrays like traversing, insertion, deletion, sorting, and searching.
2. It provides examples of declaring and initializing arrays, as well as dynamic array allocation using pointers and new/delete operators.
3. Searching techniques like linear search and binary search are explained, with linear search comparing each element sequentially while binary search eliminates half the elements at each step for sorted arrays.
Learn about Agile Software Development's advantages. Simplify your workflow to spur quicker innovation. Jump right in! We have also discussed the advantages.
Microservice Teams - How the cloud changes the way we workSven Peters
A lot of technical challenges and complexity come with building a cloud-native and distributed architecture. The way we develop backend software has fundamentally changed in the last ten years. Managing a microservices architecture demands a lot of us to ensure observability and operational resiliency. But did you also change the way you run your development teams?
Sven will talk about Atlassian’s journey from a monolith to a multi-tenanted architecture and how it affected the way the engineering teams work. You will learn how we shifted to service ownership, moved to more autonomous teams (and its challenges), and established platform and enablement teams.
Hand Rolled Applicative User ValidationCode KataPhilip Schwarz
Could you use a simple piece of Scala validation code (granted, a very simplistic one too!) that you can rewrite, now and again, to refresh your basic understanding of Applicative operators <*>, <*, *>?
The goal is not to write perfect code showcasing validation, but rather, to provide a small, rough-and ready exercise to reinforce your muscle-memory.
Despite its grandiose-sounding title, this deck consists of just three slides showing the Scala 3 code to be rewritten whenever the details of the operators begin to fade away.
The code is my rough and ready translation of a Haskell user-validation program found in a book called Finding Success (and Failure) in Haskell - Fall in love with applicative functors.
UI5con 2024 - Keynote: Latest News about UI5 and it’s EcosystemPeter Muessig
Learn about the latest innovations in and around OpenUI5/SAPUI5: UI5 Tooling, UI5 linter, UI5 Web Components, Web Components Integration, UI5 2.x, UI5 GenAI.
Recording:
https://www.youtube.com/live/MSdGLG2zLy8?si=INxBHTqkwHhxV5Ta&t=0
2. SORTING
A process that organizes a collection of data into either ascending or descending order.
Sorting can be used as a first step for searching the data.
Binary search requires a sorted array.
4. SELECTION SORT
It is simple and easy to implement.
It is inefficient for large lists, and is usually used to sort lists of no more than 1000 items.
For an array of n elements, n-1 iterations are required to sort the array.
5. SELECTION SORT
Select the smallest value from the list.
Bring it to the first location of the list.
Find the next smallest value and repeat the process, swapping values into place.
6. SELECTION SORT
Suppose the name of the array is A and it has four elements with the following values:
4 19 1 3
To sort this array in ascending order, n-1, i.e. three, iterations will be required.
7. SELECTION SORT
Iteration-1
The array is scanned from the first to the last element and the element with the smallest value is selected. The smallest value is 1, at location 3. The address of the smallest element is noted and the selected value is interchanged with the first element, i.e. A[1] and A[3] are swapped.
4 19 1 3
1 19 4 3
8. SELECTION SORT
Iteration-2
The array is scanned from the second to the last element and the element with the smallest value is selected. The smallest value is 3, at location 4. The address of the smallest element is noted and the selected value is interchanged with the second element, i.e. A[2] and A[4] are swapped.
1 19 4 3
1 3 4 19
9. SELECTION SORT
Iteration-3
The array is scanned from the third to the last element and the element with the smallest value is selected. The smallest value is 4, at location 3. The address of the smallest element is noted and the selected value is interchanged with the third element, i.e. A[3] is swapped with itself and the array is unchanged.
1 3 4 19
1 3 4 19
11. SELECTION SORT
void selectionSort(int numbers[], int array_size)
{
    int i, j;
    int min, temp;
    for (i = 0; i < array_size - 1; i++)
    {
        min = i;                           /* assume position i holds the minimum */
        for (j = i + 1; j < array_size; j++)
        {
            if (numbers[j] < numbers[min])
                min = j;                   /* remember the smaller element */
        }
        temp = numbers[i];                 /* swap the minimum into position i */
        numbers[i] = numbers[min];
        numbers[min] = temp;
    }
}
12. BUBBLE SORT
It is the oldest and simplest method and can be easily implemented.
It is also the slowest and is considered to be the most inefficient sorting algorithm.
It works by comparing each item in the list with the item next to it, and swapping them if required.
It is used only for small amounts of data.
13. BUBBLE SORT
1. First Iteration
   1. Starting from the first index, compare the first and the second elements.
   2. If the first element is greater than the second element, they are swapped.
   3. Now, compare the second and the third elements. Swap them if they are not in order.
   4. The above process goes on until the last element.
2. The same process goes on for the remaining iterations.
14. BUBBLE SORT
To sort data in an array of n elements, n-1 iterations are required.
The following steps explain sorting the array in ascending order.
In the first iteration, the largest value moves to the last position in the array.
In the second iteration, the above process is repeated and the second largest value moves to the second last position in the array, and so on.
After the (n-1)th iteration, the data is arranged in ascending order.
15. BUBBLE SORT
Suppose the name of the array is A and it has four elements with the following values:
4 19 1 3
To sort this array in ascending order, n-1, i.e. three, iterations will be required.
16. BUBBLE SORT
Iteration-1
A[1] is compared with element A[2]. Since 4 is not greater than 19, there is no change in the list.
A[2] is compared with element A[3]. Since 19 is greater than 1, the values are interchanged.
A[3] is compared with element A[4]. Since 19 is greater than 3, the values are interchanged.
Thus at the end of the first iteration, the largest value moves to the last position in the array.
4 19 1 3
4 1 19 3
4 1 3 19
17. BUBBLE SORT
Iteration-2
A[1] is compared with element A[2]. Since 4 is greater than 1, the values are interchanged.
A[2] is compared with element A[3]. Since 4 is greater than 3, the values are interchanged.
Thus at the end of the second iteration, the second largest value moves to the second last position in the array.
1 4 3 19
1 3 4 19
18. BUBBLE SORT
Iteration-3
A[1] is compared with element A[2]. Since 1 is not greater than 3, the values are not interchanged.
So the array is sorted in ascending order.
1 3 4 19
19. BUBBLE SORT
Bubble sort is similar to selection sort in the sense that it repeatedly finds the largest/smallest value in the unprocessed portion of the array and puts it in place.
However, finding the largest value is not done by selection this time.
We "bubble" up the largest value instead.
20. BUBBLE SORT
Compares adjacent items and exchanges them if they are out of order.
Consists of several passes.
In the first pass, the largest value is "bubbled" to its proper position.
In the second pass, the last value does not need to be compared.
21. BUBBLE SORT
void bubbleSort(int a[], int n)
{
    int i, j, temp;
    for (i = 0; i < n - 1; i++)
    {
        for (j = 0; j < (n - 1) - i; j++)
        {
            if (a[j] > a[j+1])
            {
                temp = a[j];               /* swap adjacent out-of-order elements */
                a[j] = a[j+1];
                a[j+1] = temp;
            }
        }
    }
}
23. BUBBLE SORT EXAMPLE
First Pass
6, 2, 9, 11, 9, 3, 7, 12
2, 6, 9, 11, 9, 3, 7, 12
2, 6, 9, 9, 11, 3, 7, 12
2, 6, 9, 9, 3, 11, 7, 12
2, 6, 9, 9, 3, 7, 11, 12
Second Pass
Notice that this time we do not have to compare the last two numbers, as we know the 12 is in position. This pass therefore only requires 6 comparisons.
24. BUBBLE SORT EXAMPLE
First Pass: 6, 2, 9, 11, 9, 3, 7, 12 → 2, 6, 9, 9, 3, 7, 11, 12
Second Pass
2, 6, 9, 9, 3, 7, 11, 12
2, 6, 9, 3, 9, 7, 11, 12
2, 6, 9, 3, 7, 9, 11, 12
Third Pass
This time the 11 and 12 are in position. This pass therefore only requires 5 comparisons.
25. BUBBLE SORT EXAMPLE
First Pass: 6, 2, 9, 11, 9, 3, 7, 12 → 2, 6, 9, 9, 3, 7, 11, 12
Second Pass: 2, 6, 9, 9, 3, 7, 11, 12 → 2, 6, 9, 3, 7, 9, 11, 12
Third Pass
2, 6, 9, 3, 7, 9, 11, 12
2, 6, 3, 9, 7, 9, 11, 12
2, 6, 3, 7, 9, 9, 11, 12
Fourth Pass
Each pass requires fewer comparisons. This time only 4 are needed.
26. BUBBLE SORT EXAMPLE
First Pass: 6, 2, 9, 11, 9, 3, 7, 12 → 2, 6, 9, 9, 3, 7, 11, 12
Second Pass: 2, 6, 9, 9, 3, 7, 11, 12 → 2, 6, 9, 3, 7, 9, 11, 12
Third Pass: 2, 6, 9, 3, 7, 9, 11, 12 → 2, 6, 3, 7, 9, 9, 11, 12
Fourth Pass
2, 6, 3, 7, 9, 9, 11, 12
2, 3, 6, 7, 9, 9, 11, 12
Fifth Pass
The list is now sorted, but the algorithm does not know this until it completes a pass with no exchanges.
27. INSERTION SORT
It is as simple as the bubble sort, but it is almost twice as efficient as the bubble sort.
It is relatively simple and easy to implement.
It is inefficient for large lists.
28. INSERTION SORT
In insertion sorting, the list or array is scanned from the beginning to the end.
In each iteration, one element is inserted into its correct position relative to the previously sorted elements of the list.
The array elements are not swapped or interchanged.
They are shifted towards the right of the list to make room for the new element to be inserted.
29. INSERTION SORT
Given an unsorted list.
Partition the list into two regions: sorted & unsorted.
At each step, take the first item from the unsorted region and place it into its correct position.
This also requires shifting the remaining items to make room for the inserted item.
30. INSERTION SORT
Suppose the name of the array is A and it has six elements with the following values:
16 17 2 8 18 1
To sort this array in ascending order, six iterations will be required.
31. INSERTION SORT
Iteration-1
A[1] is compared with itself and is not shifted. The array A remains the same.
16 17 2 8 18 1
16 17 2 8 18 1
32. INSERTION SORT
Iteration-2
All elements to the left of A[2] that are greater than A[2] are shifted one position to the right to make room for A[2]'s data to be inserted into the correct location.
There is only one element, with value 16, to the left of A[2]. No shifting takes place because 16 is less than 17, so A[1] and A[2] are in the correct positions relative to each other. The array A remains the same.
16 17 2 8 18 1
16 17 2 8 18 1
33. INSERTION SORT
Iteration-3
All elements to the left of A[3] that are greater than A[3] are shifted one position to the right to make room for A[3]'s data to be inserted into the correct location.
There are two elements to the left of A[3] and both are greater than A[3]. Thus the data of A[1] and A[2] is shifted one position to the right and the value of A[3] is inserted at A[1]. The array A after shifting and inserting is:
16 17 2 8 18 1
2 16 17 8 18 1
34. INSERTION SORT
Iteration-4
All elements to the left of A[4] that are greater than A[4] are shifted one position to the right to make room for A[4]'s data to be inserted into the correct location.
There are three elements to the left of A[4], and A[2] and A[3] are greater than A[4]. Thus the data of A[2] and A[3] is shifted one position to the right and the value of A[4] is inserted at A[2]. The array A after shifting and inserting is:
2 16 17 8 18 1
2 8 16 17 18 1
35. INSERTION SORT
Iteration-5
All elements to the left of A[5] that are greater than A[5] are shifted one position to the right to make room for A[5]'s data to be inserted into the correct location.
There are four elements to the left of A[5] and all are less than A[5]. Thus no shifting or insertion takes place. The array A remains the same:
2 8 16 17 18 1
2 8 16 17 18 1
36. INSERTION SORT
Iteration-6
All elements to the left of A[6] that are greater than A[6] are shifted one position to the right to make room for A[6]'s data to be inserted into the correct location.
There are five elements to the left of A[6] and all are greater than A[6]. Thus the data of each element from A[1] to A[5] is shifted one position to the right and the value of A[6] is inserted at A[1]. The array A after shifting and inserting is:
2 8 16 17 18 1
1 2 8 16 17 18
37. ALGORITHM – INSERTION SORT
InsertionSort()
Algorithm to sort an array A consisting of N elements in ascending order
1. Start
2. Repeat steps 3 to 8 for C = 2 to N
3.   Set Temp = A[C]
4.   Set L = C
5.   Repeat steps 6 to 7 while (L > 1 and Temp <= A[L-1])
6.     Set A[L] = A[L-1]
7.     L = L - 1
8.   Set A[L] = Temp
9. Exit
38. INSERTION SORT CONT…
The insertion sort algorithm sorts the list by moving each element to its proper place.
[Figure 6: Array list to be sorted]
[Figure 7: Sorted and unsorted portions of the array list]
39. INSERTION SORT ALGORITHM (CONT'D)
[Figure 8: Move list[4] into list[2]]
[Figure 9: Copy list[4] into temp]
40. INSERTION SORT ALGORITHM (CONT'D)
[Figure 10: Array list before copying list[3] into list[4], then list[2] into list[3]]
[Figure 11: Array list after copying list[3] into list[4], and then list[2] into list[3]]
72. INSERTION SORT ALGORITHM
void insertionSort(int a[], int length)
{
    int i, j, value;
    for (i = 1; i < length; i++)
    {
        value = a[i];                      /* element to insert */
        for (j = i - 1; j >= 0 && a[j] > value; j--)
        {
            a[j + 1] = a[j];               /* shift greater elements right */
        }
        a[j + 1] = value;                  /* insert into the gap */
    }
}
73. MERGE SORT
Merge sort is a sorting algorithm for
rearranging lists (or any other data
structure that can only be accessed
sequentially) into a specified order.
It is a particularly good example of the
divide and conquer algorithmic paradigm.
74. MERGE SORT
Conceptually, merge sort works as follows:
Divide the unsorted list into two sub-lists
of about half the size.
Sort each of the two sub-lists.
Merge the two sorted sub-lists back into
one sorted list.
75. MERGE SORT
Array mergeSort(Array m)
    Array left, right
    if length(m) ≤ 1
        return m
    else
        middle = length(m) / 2
        for each x in m up to middle
            add x to left
        for each x in m after middle
            add x to right
        left = mergeSort(left)
        right = mergeSort(right)
        result = merge(left, right)
        return result
76. MERGE SORT
Array merge(left, right)
    Array result
    while length(left) > 0 and length(right) > 0
        if first(left) ≤ first(right)
            append first(left) to result
            left = rest(left)
        else
            append first(right) to result
            right = rest(right)
    if length(left) > 0
        append left to result
    if length(right) > 0
        append right to result
    return result
87. MERGE SORT ANOTHER EXAMPLE
Suppose the name of the array is AB and
it has six elements with the following
values:
16 17 2 8 18 1
To sort this array in ascending order
88. MERGE SORT
AB: 16 17 2 8 18 1
Divide array AB into two sub-arrays A & B.
A: 16 17 2    B: 8 18 1
Sort A & B using bubble, selection or insertion sort.
A: 2 16 17    B: 1 8 18
89. MERGE SORT
A: 2 16 17    B: 1 8 18
Compare A[1] to B[1]. Since B[1] is less than A[1], the value of B[1] is moved to AB[1].
AB: 1
90. MERGE SORT
A: 2 16 17    B: 1 8 18
Compare A[1] to B[2]. Since A[1] is less than B[2], the value of A[1] is moved to AB[2].
AB: 1 2
91. MERGE SORT
A: 2 16 17    B: 1 8 18
Compare A[2] to B[2]. Since B[2] is less than A[2], the value of B[2] is moved to AB[3].
AB: 1 2 8
92. MERGE SORT
A: 2 16 17    B: 1 8 18
Compare A[2] to B[3]. Since A[2] is less than B[3], the value of A[2] is moved to AB[4].
AB: 1 2 8 16
93. MERGE SORT
A: 2 16 17    B: 1 8 18
Compare A[3] to B[3]. Since A[3] is less than B[3], the value of A[3] is moved to AB[5].
AB: 1 2 8 16 17
At the end, B[3] is moved to AB[6] and the array is sorted.
AB: 1 2 8 16 17 18
94. ALGORITHM
Mergesort(passed an array)
    if array size > 1
        Divide array in half
        Call Mergesort on first half
        Call Mergesort on second half
        Merge two halves
Merge(passed two arrays)
    Compare leading element in each array
    Select the lower and place it in the new array
    (If one input array is empty, then place the remainder of the other array in the output array)
136. QUICK SORT
Quick sort sorts by employing a divide and conquer
strategy to divide a list into two sub-lists.
The steps are:
Pick an element, called a pivot, from the list.
Reorder the list so that all elements which are less
than the pivot come before the pivot and so that all
elements greater than the pivot come after it (equal
values can go either way).
After this partitioning, the pivot is in its final
position. This is called the partition operation.
Recursively sort the sub-list of lesser elements and
the sub-list of greater elements.
137. QUICK SORT
Choose the appropriate pivot, either
randomly or near the median of the array
elements.
Avoid a pivot which makes either of the
two halves empty.
139. PICK PIVOT ELEMENT
There are a number of ways to pick the pivot element.
In this example, we will use the first element in the
array:
40 20 10 80 60 50 7 30 100
140. PARTITIONING ARRAY
Given a pivot, partition the elements of the array
such that the resulting array consists of:
1. One sub-array that contains elements >= pivot
2. Another sub-array that contains elements < pivot
The sub-arrays are stored in the original data
array.
Partitioning loops through, swapping elements
below/above pivot.
168. QUICK SORT
function quicksort(list q)
    list low, pivotList, high
    if length(q) ≤ 1
        return q
    select a pivot value from q
    for each x in q except the pivot element
        if x < pivot then add x to low
        if x ≥ pivot then add x to high
    add pivot to pivotList
    return concatenate(quicksort(low), pivotList, quicksort(high))