Java 5 Concurrency

1.1 Locks

Before Java 5, concurrency was achieved using synchronized locks and the
wait/notify idiom. Synchronization is a locking mechanism where a block of
code or a method is protected by a software lock. Any thread that wants to
execute this block of code must first acquire the lock. The lock is released
once the thread exits the synchronized block or method. Acquiring and
releasing the lock is done implicitly, relieving the programmer of lock
book-keeping. However, synchronization has some drawbacks, as described
below.

The wait/notify idiom allows a thread to wait for a signal from another thread.
The wait can be timed or interrupted (using Thread.interrupt()).
The wait is signaled using notify.
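
A minimal sketch of the idiom (the class and field names here are
illustrative, not from the original text):

public class WaitNotifyBox {

    private final Object lock = new Object();
    private Object item;   // shared state guarded by lock

    public void put(Object x) {
        synchronized (lock) {
            item = x;
            lock.notify();   // wake one thread waiting on lock
        }
    }

    public Object take() throws InterruptedException {
        synchronized (lock) {
            while (item == null) {
                lock.wait(); // releases the lock and waits for notify or interrupt
            }
            Object x = item;
            item = null;
            return x;
        }
    }
}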


1.1.1       Drawbacks of Synchronization

     No Back-off: Once a thread enters a synchronized block or
     method, it has to wait until the lock is available. It cannot back
     off to execute other instructions if the lock is unavailable or is
     taking a very long time to acquire.

     No Read-Only Access: Multiple threads cannot share the lock even
     when only read-only access is required.

     Compile Time: Synchronization is a compile-time decision. It
     cannot be turned off based on run-time conditions; enabling that
     requires a lot of code duplication.

     No Metadata: Lock metadata, such as the number of threads waiting
     for the lock or the average time to acquire it, is not available
     to a Java program.

1.1.2       Lock Interface

As of Java 5, implementations of the Lock interface can be used instead of
synchronization.

When a thread acquires a Lock, memory synchronization occurs; the behavior is
the same as entering a synchronized block or method.

The Lock interface has methods to lock, to lock interruptibly, and to try the
lock without blocking.

lockInterruptibly: This method acquires the lock unless the thread is
interrupted by another thread. If the lock is available when the method is
called, it is acquired immediately. If the lock is not available, the thread
becomes dormant and waits for the lock to become available. If some other
thread interrupts this thread while it waits, InterruptedException is thrown.

tryLock: This method acquires the lock and returns true immediately if the
lock is available. If the lock is not available, it returns false instead of
blocking; a timed variant waits up to a given timeout.
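
The following sketch shows both methods in use; the class name and the
one-second timeout are illustrative assumptions, not part of the original
text:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockMethodsDemo {

    private final Lock lock = new ReentrantLock();

    // Back off to other work if the lock is not free within one second.
    public void tryWork() throws InterruptedException {
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                // critical section
            } finally {
                lock.unlock();
            }
        } else {
            // lock not acquired: do alternative work instead of blocking
        }
    }

    // Wait for the lock but stay responsive to Thread.interrupt().
    public void interruptibleWork() throws InterruptedException {
        lock.lockInterruptibly();   // throws InterruptedException if interrupted
        try {
            // critical section
        } finally {
            lock.unlock();
        }
    }
}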


1.1.3       Lock Implementation

1.1.3.1 ReentrantLock
ReentrantLock is an implementation of the Lock interface. It allows a thread
to re-enter code that is protected by a lock it already holds.

It has additional methods that return the state of the lock and other meta
information.

ReentrantLock can be created with a fairness parameter. When the lock is
fair, it is acquired by threads in arrival order.
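
For example, a fair lock and some of its meta information (a small sketch;
the class name is illustrative):

import java.util.concurrent.locks.ReentrantLock;

public class FairLockInfo {

    // true = fair: the longest-waiting thread acquires the lock next
    private final ReentrantLock lock = new ReentrantLock(true);

    public void report() {
        System.out.println("fair: " + lock.isFair());
        System.out.println("locked: " + lock.isLocked());
        System.out.println("threads waiting: " + lock.getQueueLength());
        System.out.println("held by current thread: " + lock.isHeldByCurrentThread());
    }
}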




1.1.3.2 ReentrantReadWriteLock
This is an implementation of ReadWriteLock. It holds a pair of associated
locks, one for read-only operations and one for write operations, obtained
via readLock() and writeLock(). The read lock can be shared by multiple
readers. The write lock is exclusive, i.e., it can be granted to only one
writer thread, and only when no reader thread holds the read lock.

A reader thread is one that performs read operations; a writer thread
performs write operations.

When the fairness parameter is true, the locks are granted based on the
arrival order of threads.


Lock Downgrading

Lock downgrading is allowed, i.e., a thread that holds the write lock can
acquire the read lock and then release the write lock.

ReentrantReadWriteLock l = new ReentrantReadWriteLock();
l.writeLock().lock();
l.readLock().lock();
l.writeLock().unlock();
Lock Upgrading

Lock upgrading is not allowed, i.e., a thread that holds the read lock cannot
acquire the write lock without first releasing the read lock.

ReentrantReadWriteLock l = new ReentrantReadWriteLock();
l.readLock().lock();
//process..
l.readLock().unlock(); // first release the read lock, then acquire the write lock
l.writeLock().lock();


Concurrency Improvement

When there is a large number of reader threads and a small number of writer
threads, a read-write lock improves concurrency over an exclusive lock.
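
A sketch of a read-mostly map guarded by a ReentrantReadWriteLock (the class
is illustrative and not part of the original text; compare it with the
exclusive-lock version in the next section):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadMostlyMap {

    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private final Map myMap = new HashMap();

    public Object get(Object key) {
        rwl.readLock().lock();    // shared: many readers may hold this at once
        try {
            return myMap.get(key);
        } finally {
            rwl.readLock().unlock();
        }
    }

    public void put(Object key, Object val) {
        rwl.writeLock().lock();   // exclusive: blocks readers and other writers
        try {
            myMap.put(key, val);
        } finally {
            rwl.writeLock().unlock();
        }
    }
}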




1.1.4       Typical Lock Usage

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockedMap {

    private final Lock l = new ReentrantLock();
    private final Map myMap = new HashMap();

    public Object get(Object key) {
        l.lock();
        try {
            return myMap.get(key);
        } finally {
            l.unlock();   // always release in finally
        }
    }

    public void put(Object key, Object val) {
        l.lock();
        try {
            myMap.put(key, val);
        } finally {
            l.unlock();
        }
    }
}
1.2 Condition

The Condition interface factors the object monitor methods wait, notify, and
notifyAll out into separate objects. Condition objects are intrinsically
bound to a Lock and are obtained by calling newCondition() on the lock
instance.

Where Lock replaces synchronized methods and blocks, Condition replaces the
object monitor methods.

Conditions are also called condition variables or condition queues.

Multiple condition variables can be created on the same lock object, and
different sets of threads can wait on different condition variables. A
classic usage is the producers and consumers of a bounded buffer.

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer {

    final Object[] buffer = new Object[10];
    final Lock l = new ReentrantLock();
    final Condition producer = l.newCondition();
    final Condition consumer = l.newCondition();
    int bufferCount, putIdx, getIdx;

    public void put(Object x) throws InterruptedException {
        l.lock();
        try {
            while (bufferCount == buffer.length)
                producer.await();      // buffer full: wait for a consumer

            buffer[putIdx++] = x;
            if (putIdx == buffer.length)
                putIdx = 0;            // wrap around
            ++bufferCount;
            consumer.signal();         // wake one waiting consumer
        } finally {
            l.unlock();
        }
    }

    public Object get() throws InterruptedException {
        l.lock();
        try {
            while (bufferCount == 0)
                consumer.await();      // buffer empty: wait for a producer

            Object x = buffer[getIdx++];
            if (getIdx == buffer.length)
                getIdx = 0;            // wrap around
            --bufferCount;
            producer.signal();         // wake one waiting producer
            return x;
        } finally {
            l.unlock();
        }
    }
}


awaitUninterruptibly

This method on a condition variable causes the thread to wait until a signal
is executed on that variable; it does not respond to interruption.


IllegalMonitorStateException

The thread calling methods on a condition variable must hold the
corresponding lock. If it does not, IllegalMonitorStateException is thrown.
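
A small sketch of the rule (illustrative names):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionRule {

    private final Lock lock = new ReentrantLock();
    private final Condition ready = lock.newCondition();

    public void waitUntilReady() {
        lock.lock();                       // the lock must be held before await/signal
        try {
            ready.awaitUninterruptibly();  // waits for a signal, ignoring interrupts
        } finally {
            lock.unlock();
        }
    }

    public void broken() {
        ready.signal();   // lock not held: throws IllegalMonitorStateException
    }
}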




1.3       Atomic Variables
Atomic variables are used for lock-free, thread-safe programming on single
variables.
As with volatile variables, atomic variables are never cached locally; they
are always synchronized with main memory.
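
A small usage sketch (the counter class is illustrative):

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {

    private final AtomicInteger count = new AtomicInteger(0);

    public int increment() {
        return count.incrementAndGet();          // atomic read-modify-write, no lock
    }

    public boolean resetIfEquals(int expected) {
        return count.compareAndSet(expected, 0); // succeeds only if the current value == expected
    }
}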


CompareAndSet

Atomic variables use the compare-and-swap (CAS) primitive of processors. CAS
has three operands: a memory location (V), the expected old value of the
memory location (A), and a new value (B). If the current value of the memory
location matches the expected old value (A), the new value (B) is written to
the memory location (V) and true is returned. If the current value differs
from the expected old value, the memory is not updated and false is returned.

Code logic can retry this operation if false is returned.
The code below illustrates the CAS algorithm. The actual implementation is in
hardware for processors that support CAS; for processors that do not support
CAS, locking is used to simulate it, as shown below.

public class SimulatedCAS {

    private int value;

    public synchronized int getValue() {
        return value;
    }

    public synchronized boolean compareAndSet(int expectedValue, int newValue) {
        if (value == expectedValue) {
            value = newValue;   // update only when the current value matches
            return true;
        }
        return false;           // another thread changed the value first
    }
}
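
On a real atomic variable, the retry pattern looks like the following sketch
(the doubling operation is illustrative):

import java.util.concurrent.atomic.AtomicInteger;

public class CasRetry {

    private final AtomicInteger value = new AtomicInteger(1);

    // Atomically double the value by retrying until compareAndSet succeeds.
    public int doubleValue() {
        while (true) {
            int current = value.get();
            int next = current * 2;
            if (value.compareAndSet(current, next)) {
                return next;   // CAS succeeded: no other thread interfered
            }
            // CAS failed because another thread changed the value; retry
        }
    }
}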




1.4 Data Structures

1.4.1           Blocking Queue

A blocking queue is a queue data structure with additional behavior:
consumers of the queue wait/block when the queue is empty, and producers
wait/block when the queue is full.

Queue implementations can guarantee fairness, where the longest-waiting
consumer/producer gets the first chance to access the queue.

The code below shows a producer and a consumer using a blocking queue.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class Producer implements Runnable {

    private final BlockingQueue q;

    public Producer(BlockingQueue q) {
        this.q = q;
    }

    @Override
    public void run() {
        try {
            while (true) {
                q.put(produce());    // blocks while the queue is full
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    private Object produce() {
        return new Object();
    }
}

public class Consumer implements Runnable {

    private final BlockingQueue q;

    public Consumer(BlockingQueue q) {
        this.q = q;
    }

    @Override
    public void run() {
        try {
            while (true) {
                consume(q.take());   // blocks while the queue is empty
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    private void consume(Object item) {
        // process the item
    }
}

public class Setup {

    public static void main(String[] args) {
        BlockingQueue q = new ArrayBlockingQueue(10, true);   // capacity 10, fair
        new Thread(new Producer(q)).start();
        new Thread(new Consumer(q)).start();
        new Thread(new Consumer(q)).start();
    }
}




1.4.2        ConcurrentHashMap

ConcurrentHashMap is a thread-safe hash map, but it does not block all get
and put operations the way a synchronized hash map does. It allows full
concurrency of gets and a high expected concurrency of puts.

ConcurrentHashMap internally divides its storage into bins (segments), and
the entries in a bin are connected by a linked list.
Nulls are not allowed as keys or values.

A get operation generally does not entail locking. However, if the value read
appears to be null, the bin (segment) is locked first and the value is
re-read under the lock; a value can appear null because of reordering of
instructions.

put operations are performed by locking the particular bin (segment).



1.5 Synchronizers

Synchronizers control the flow of execution in one or more threads.


1.5.1        Semaphore

A counting semaphore is used to restrict the number of threads that can
access a physical or logical resource. A semaphore maintains a set of
permits. Each call to acquire() consumes a permit, possibly blocking if a
permit is not available. Each call to release() returns a permit and signals
a waiting acquirer.
Usage:
A library has N seats and thus allows only N members to use it at one time.
If all seats are occupied, arriving members wait for a seat to become vacant.
Design a model for the library.

package com.concur.semaphore;

import java.util.concurrent.Semaphore;

public class Library {

    // N = 50 seats; fair = true so the longest-waiting member enters first
    private final Semaphore s = new Semaphore(50, true);

    public void enter() throws InterruptedException {
        s.acquire();   // blocks if all seats are occupied
    }

    public void exit() {
        s.release();   // frees a seat and signals a waiting member
    }

    public void borrowBooks(int id) {
        //implementation
    }

    public void returnBook(int id) {
        //implementation
    }

    public static void main(String[] args) {
        Library l = new Library();
        try {
            l.enter();
            l.borrowBooks(1234);
            l.exit();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}




1.5.2        Mutex

A mutex is a counting semaphore with only one permit. Mutexes have a lot in
common with locks; the difference is that with a mutex, a thread other than
the one holding the permit can call release, as sketched below.
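
A minimal sketch of a mutex built from a binary semaphore (the class name is
illustrative):

import java.util.concurrent.Semaphore;

public class MutexDemo {

    // A semaphore with a single permit acts as a mutex.
    private final Semaphore mutex = new Semaphore(1);

    public void criticalSection() throws InterruptedException {
        mutex.acquire();
        try {
            // exclusive work
        } finally {
            mutex.release();   // unlike Lock, any thread may call release
        }
    }
}
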
1.5.3        Cyclic Barrier

With a cyclic barrier, each thread comes to a barrier and waits there until
all threads have reached the barrier. Once all threads reach the barrier,
they are released for further processing. Optionally, an action can be run
before the threads are released.



package com.concur.cyclic;

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class Barrier {

    final int num_threads;

    final CyclicBarrier cb;

    volatile boolean complete = false;   // set by the barrier action when work is done

    public Barrier(int n) {
        num_threads = n;
        cb = new CyclicBarrier(num_threads, new Runnable() {

            @Override
            public void run() {
                // runs once per trip, after all threads have reached the barrier
                System.out.println("All threads reached barrier");
                // check whether the work is finished and set complete accordingly
            }
        });
    }

    public void process() throws InterruptedException, BrokenBarrierException {
        while (!complete) {
            // process a chunk of work
            cb.await();   // wait for all threads; the barrier is reusable (cyclic)
            // exits if processing completed, else loops for the next phase
        }
    }
}




1.5.4        Countdown Latch

A countdown latch is similar to a cyclic barrier but differs in the way the
threads are released. With a cyclic barrier, threads are released
automatically when all threads reach the barrier. With a countdown latch
initialized to N, waiting threads are released when countDown() has been
called N times. Any call to await() blocks while the count is non-zero; once
the count reaches 0, await() returns immediately.
Countdown latches cannot be reused: once the count reaches 0, all subsequent
await() calls return immediately.

package com.concur.countdown;

import java.util.concurrent.CountDownLatch;

public class Latch {

    private class Worker implements Runnable {

        final CountDownLatch start, done;

        public Worker(CountDownLatch start, CountDownLatch done) {
            this.start = start;
            this.done = done;
        }

        @Override
        public void run() {
            try {
                start.await();      // wait for the start signal
                // do process
                done.countDown();   // report completion
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) {
        Latch l = new Latch();
        CountDownLatch start = new CountDownLatch(1);   // single start signal
        CountDownLatch done = new CountDownLatch(10);   // one count per worker

        for (int i = 0; i < 10; i++) {
            new Thread(l.new Worker(start, done)).start();
        }

        try {
            // do something
            start.countDown();   // release all workers at once

            // do something

            done.await();        // wait until all 10 workers have finished
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}


1.6 Executor Framework

The Executor framework provides an API to create thread pools and to submit
tasks for execution by those pools.

The Executor interface has only one method, execute, which takes a Runnable
object.

Executor thread pools are created by calling factory methods:

Executors.newCachedThreadPool(): If an idle thread is available it will be
reused, otherwise a new thread is created. Threads not used for 60 seconds
are removed from the cache.

Executors.newFixedThreadPool(n): n threads are created and added to the pool.
Tasks are stored in an unbounded queue, and pool threads pick up tasks from
the queue. If a thread terminates due to a failure, a new thread is created
and added to the pool.

Executors.newSingleThreadExecutor(): A pool with a single thread.




Executor Usage:


package com.concur.executor;

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

public class WebServer {

    Executor pool = Executors.newFixedThreadPool(50);

    public static void main(String[] args) throws IOException {
        WebServer ws = new WebServer();
        ServerSocket ssocket = new ServerSocket(80);
        while (true) {
            final Socket soc = ssocket.accept();   // final so the anonymous class can use it
            Runnable r = new Runnable() {
                @Override
                public void run() {
                    handle(soc);   // process the request on a pool thread
                }
            };
            ws.pool.execute(r);
        }
    }

    private static void handle(Socket soc) {
        // read the request and write the response
    }
}

1.7       Future

A Future represents a task and serves as a wrapper for it. The task may not
have started execution, may be executing, or may have completed. The result
of the task is obtained by calling future.get(), which returns immediately if
the task has completed and otherwise blocks until the task completes.

FutureTask is an implementation of the Future interface. It also implements
the Runnable interface, which allows the task to be submitted to
executor.execute(Runnable r).

The code snippet below uses FutureTask to implement a thread-safe cache.

package com.concur.cache;

import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

public class SimpleCache<K, V> {

    private ConcurrentMap<K, FutureTask<V>> cache =
            new ConcurrentHashMap<K, FutureTask<V>>();
    Executor pool = Executors.newFixedThreadPool(10);

    public V get(final K key) throws InterruptedException, ExecutionException {
        FutureTask<V> val = cache.get(key);
        if (val == null) {
            Callable<V> c = new Callable<V>() {
                @Override
                public V call() {
                    System.out.println("Cache Miss");
                    return (V) new Integer(key.hashCode());   // compute the value
                }
            };

            val = new FutureTask<V>(c);
            FutureTask<V> oldVal = cache.putIfAbsent(key, val);
            if (oldVal == null) {
                // this thread won the race: execute the future task to compute
                // the actual cache value associated with the key
                pool.execute(val);
            } else {
                // another thread won the race to store its future task in the
                // concurrent map, so use its task instead
                val = oldVal;
            }
        } else {
            System.out.println("Cache Hit");
        }
        return val.get();   // blocks until the value has been computed
    }

    public static void main(String[] args) {
        try {
            SimpleCache<String, Integer> sc = new SimpleCache<String, Integer>();
            System.out.println(sc.get("Hello"));
            System.out.println(sc.get("Hello"));
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
}
